Re: [ceph-users] rbd mapping error on ubuntu 16.04

2016-05-23 Thread Ilya Dryomov
On Mon, May 23, 2016 at 2:59 PM, Albert Archer  wrote:
>
> Thanks.
> But how can I use these features?
> So there is no way to enable them on the Ubuntu 16.04 kernel (4.4.0)?
> That's strange!

What is your use case?  If you are using the kernel client, create your
images with

$ rbd create --size <size> --image-feature layering <image-spec>

or add

rbd default features = 3

to ceph.conf on the client side.  (Setting rbd default features on the
OSDs will have no effect.)
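
As a concrete sketch of that client-side setting (the [client] section placement
and reading 3 as layering + striping follow the usual feature-bit values,
layering = 1 and striping = 2, which are not spelled out in this thread):

[client]
    rbd default features = 3

With that in place, a newly created image should carry only those features,
which can be checked with something like (vdisk2 is a hypothetical image name):

$ rbd create --size 4096 testpool/vdisk2
$ rbd info testpool/vdisk2 | grep features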

These features are nice to have but don't really affect the main I/O
path in the majority of cases.  The image is perfectly usable with them
disabled.
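
For an image that already exists, such as the one in the original report below,
a sketch of the "rbd feature disable" route named in the error message would be
(if the tool complains about ordering, disable fast-diff and object-map before
exclusive-lock):

$ rbd feature disable testpool/vdisk1 exclusive-lock object-map fast-diff deep-flatten
$ sudo rbd map testpool/vdisk1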

Thanks,

Ilya


Re: [ceph-users] rbd mapping error on ubuntu 16.04

2016-05-23 Thread Albert Archer
Thanks.
But how can I use these features?
So there is no way to enable them on the Ubuntu 16.04 kernel (4.4.0)?
That's strange!

On Mon, May 23, 2016 at 5:28 PM, Albert Archer 
wrote:

> Thanks.
> But how can I use these features?
> So there is no way to enable them on the Ubuntu 16.04 kernel (4.4.0)?
> That's strange!
>
> On Mon, May 23, 2016 at 5:24 PM, Ilya Dryomov  wrote:
>
>> On Mon, May 23, 2016 at 12:27 PM, Albert Archer
>>  wrote:
>> > Hello All,
>> > There is a problem mapping RBD images on Ubuntu 16.04 (kernel
>> > 4.4.0-22-generic).
>> > The whole Ceph setup is based on Ubuntu 16.04 (deploy, monitors,
>> > OSDs, and clients).
>> > #
>> > #
>> >
>> > Here is some output from my setup:
>> >
>> > $ ceph status
>> >
>> > cluster 8f2da78c-89e9-4924-9238-5bf9110664cd
>> >  health HEALTH_OK
>> >  monmap e1: 3 mons at
>> > {mon1=
>> 192.168.0.52:6789/0,mon2=192.168.0.53:6789/0,mon3=192.168.0.54:6789/0}
>> > election epoch 4, quorum 0,1,2 mon1,mon2,mon3
>> >  osdmap e50: 9 osds: 9 up, 9 in
>> > flags sortbitwise
>> >   pgmap v183: 576 pgs, 2 pools, 306 bytes data, 4 objects
>> > 324 MB used, 899 GB / 899 GB avail
>> >  576 active+clean
>> >
>> > When I try to map testpool/vdisk1 (for example):
>> >
>> >  $ sudo rbd map testpool/vdisk1
>> >
>> >
>> > rbd: sysfs write failed
>> > RBD image feature set mismatch. You can disable features unsupported by
>> the
>> > kernel with "rbd feature disable".
>> > In some cases useful info is found in syslog - try "dmesg | tail" or so.
>> > rbd: map failed: (6) No such device or address
>> >
>> > $ rbd info testpool/vdisk1
>> >
>> > rbd image 'vdisk1':
>> > size 4096 MB in 1024 objects
>> > order 22 (4096 kB objects)
>> > block_name_prefix: rbd_data.1064238e1f29
>> > format: 2
>> > features: layering, exclusive-lock, object-map, fast-diff,
>> > deep-flatten
>> > flags:
>> > 
>> > 
>> >
>> > But when I disabled the following features, I could finally map my
>> > image (vdisk1).
>> >
>> > exclusive-lock, object-map, fast-diff, deep-flatten
>> >
>> > So, what is the problem?
>>
>> No problem - those features aren't yet supported by the kernel client.
>>
>> Thanks,
>>
>> Ilya
>>
>
>


Re: [ceph-users] rbd mapping error on ubuntu 16.04

2016-05-23 Thread Ilya Dryomov
On Mon, May 23, 2016 at 12:27 PM, Albert Archer
 wrote:
> Hello All,
> There is a problem mapping RBD images on Ubuntu 16.04 (kernel
> 4.4.0-22-generic).
> The whole Ceph setup is based on Ubuntu 16.04 (deploy, monitors, OSDs, and
> clients).
> #
> #
>
> Here is some output from my setup:
>
> $ ceph status
>
> cluster 8f2da78c-89e9-4924-9238-5bf9110664cd
>  health HEALTH_OK
>  monmap e1: 3 mons at
> {mon1=192.168.0.52:6789/0,mon2=192.168.0.53:6789/0,mon3=192.168.0.54:6789/0}
> election epoch 4, quorum 0,1,2 mon1,mon2,mon3
>  osdmap e50: 9 osds: 9 up, 9 in
> flags sortbitwise
>   pgmap v183: 576 pgs, 2 pools, 306 bytes data, 4 objects
> 324 MB used, 899 GB / 899 GB avail
>  576 active+clean
>
> When I try to map testpool/vdisk1 (for example):
>
>  $ sudo rbd map testpool/vdisk1
>
>
> rbd: sysfs write failed
> RBD image feature set mismatch. You can disable features unsupported by the
> kernel with "rbd feature disable".
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (6) No such device or address
>
> $ rbd info testpool/vdisk1
>
> rbd image 'vdisk1':
> size 4096 MB in 1024 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.1064238e1f29
> format: 2
> features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
> flags:
> 
> 
>
> But when I disabled the following features, I could finally map my
> image (vdisk1).
>
> exclusive-lock, object-map, fast-diff, deep-flatten
>
> So, what is the problem?

No problem - those features aren't yet supported by the kernel client.

Thanks,

Ilya


Re: [ceph-users] rbd mapping failes

2013-08-30 Thread Bernhard Glomm
Mounting CephFS fails too (I added 3 MDSs).
Does anybody have any ideas on how to debug this further?

I used ceph-deploy to create the cluster.
The XFS filesystem on the OSDs is okay; I can copy, remove, and open files on
that partition, so I assume it's something inside of Ceph?

TIA

Bernhard

P.S.: The version is
ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f) on
up-to-date raring.

--
root@nuke36[/1]:/etc/ceph # ceph -s
2013-08-30 15:03:18.454701 7f3b7cd18700  1 -- :/0 messenger.start
2013-08-30 15:03:18.455460 7f3b7cd18700  1 -- :/1003684 -- 
192.168.242.92:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0 
0x7f3b7800e8f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.456412 7f3b7c517700  1 -- 192.168.242.36:0/1003684 learned 
my addr 192.168.242.36:0/1003684
2013-08-30 15:03:18.458069 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 1  mon_map v1  776+0+0 (3609201999 0 0) 
0x7f3b6c000c30 con 0x7f3b7800e4e0
2013-08-30 15:03:18.458308 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 2  auth_reply(proto 2 0 Success) v1  33+0+0 
(345113272 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.458612 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 -- 
192.168.242.92:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 
0x7f3b60001af0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.459532 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 3  auth_reply(proto 2 0 Success) v1  
206+0+0 (1084599267 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.459816 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 -- 
192.168.242.92:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0 
0x7f3b600020d0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.460739 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 4  auth_reply(proto 2 0 Success) v1  
393+0+0 (496062897 0 0) 0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.460844 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 -- 
192.168.242.92:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7f3b7800ed80 
con 0x7f3b7800e4e0
2013-08-30 15:03:18.461118 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -- 
192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 
0x7f3b780079f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.461138 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -- 
192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 
0x7f3b7800fa10 con 0x7f3b7800e4e0
2013-08-30 15:03:18.461813 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 5  mon_map v1  776+0+0 (3609201999 0 0) 
0x7f3b6c0008f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.462016 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 6  mon_subscribe_ack(300s) v1  20+0+0 
(3156621930 0 0) 0x7f3b6c001340 con 0x7f3b7800e4e0
2013-08-30 15:03:18.463931 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -- 
192.168.242.92:6789/0 -- mon_command({prefix: get_command_descriptions} v 
0) v1 -- ?+0 0x7f3b7800b0f0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.463966 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 7  osd_map(34..34 src has 1..34) v3  
2483+0+0 (453205619 0 0) 0x7f3b6c0008c0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.464694 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 8  mon_subscribe_ack(300s) v1  20+0+0 
(3156621930 0 0) 0x7f3b6c0010e0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.464749 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 9  osd_map(34..34 src has 1..34) v3  
2483+0+0 (453205619 0 0) 0x7f3b6c002720 con 0x7f3b7800e4e0
2013-08-30 15:03:18.464765 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 10  mon_subscribe_ack(300s) v1  20+0+0 
(3156621930 0 0) 0x7f3b6c002b20 con 0x7f3b7800e4e0
2013-08-30 15:03:18.468276 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 11  mon_command_ack([{prefix: 
get_command_descriptions}]=0  v0) v1  72+0+24040 (1092875540 0 
2922658865) 0x7f3b6c002720 con 0x7f3b7800e4e0
2013-08-30 15:03:18.510756 7f3b7cd18700  1 -- 192.168.242.36:0/1003684 -- 
192.168.242.92:6789/0 -- mon_command({prefix: status} v 0) v1 -- ?+0 
0x7f3b7800b0d0 con 0x7f3b7800e4e0
2013-08-30 15:03:18.512490 7f3b76ffd700  1 -- 192.168.242.36:0/1003684 == 
mon.3 192.168.242.92:6789/0 12  mon_command_ack([{prefix: status}]=0  
v0) v1  54+0+497 (1155462804 0 3461792647) 0x7f3b6c001080 con 0x7f3b7800e4e0
  cluster f57cdca3-7222-4095-853b-03727461f725
   health HEALTH_OK
   monmap e1: 5 mons at 
{atom01=192.168.242.31:6789/0,atom02=192.168.242.32:6789/0,nuke36=192.168.242.36:6789/0,ping=192.168.242.92:6789/0,pong=192.168.242.93:6789/0},
 election epoch 42, quorum 0,1,2,3,4 atom01,atom02,nuke36,ping,pong
   osdmap e34: 2 osds: 2 up, 2 in
    pgmap v367: 1192 pgs: 1192 active+clean; 9788 bytes data, 94460 KB used, 
3722 GB / 3722 GB avail
   mdsmap e17: 

Re: [ceph-users] rbd mapping failes

2013-08-30 Thread Sage Weil
Hi Bernhard,

On Fri, 30 Aug 2013, Bernhard Glomm wrote:
 Hi all,
 
 due to a problem with ceph-deploy I currently use
 
 deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/
 raring main
 (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))
 
 Now the initialization of the cluster works like a charm,
 ceph health is okay,

Great; this will get backported to dumpling shortly and will be included 
in the 0.67.3 release.

 just the mapping of the created rbd is failing.
 
 -
 root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool
 --yes-i-really-really-mean-it
 pool 'kvm-pool' deleted
 root@ping[/1]:~ # ceph osd lspools
 
 0 data,1 metadata,2 rbd,
 root@ping[/1]:~ #
 root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
 pool 'kvm-pool' created
 root@ping[/1]:~ # ceph osd lspools
 0 data,1 metadata,2 rbd,4 kvm-pool,
 root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
 set pool 4 min_size to 2
 root@ping[/1]:~ # ceph osd dump | grep 'rep size'
 pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
 pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
 pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins
 pg_num 64 pgp_num 64 last_change 1 owner 0
 pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
 pg_num 64 pgp_num 64 last_change 1 owner 0
 pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins
 pg_num 1000 pgp_num 1000 last_change 33 owner 0
 root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
 root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
 root@ping[/1]:~ # rbd ls kvm-pool
 atom03.cimg
 atom04.cimg
 root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
 rbd image 'atom03.cimg':
     size 4000 MB in 1000 objects
     order 22 (4096 KB objects)
     block_name_prefix: rb.0.114d.2ae8944a
     format: 1
 root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
 rbd image 'atom04.cimg':
     size 4000 MB in 1000 objects
     order 22 (4096 KB objects)
     block_name_prefix: rb.0.127d.74b0dc51
     format: 1
 root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
 rbd: '/sbin/udevadm settle' failed! (256)
 root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin
 --keyring /etc/ceph/ceph.client.admin.keyring
 ^Crbd: '/sbin/udevadm settle' failed! (2)
 root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring
 /etc/ceph/ceph.client.admin.keyring
 rbd: '/sbin/udevadm settle' failed! (256)
 -

What happens if you run '/sbin/udevadm settle' from the command line?

Also, this is the very last step before rbd exits (normally with success), so
my guess is that the rbd mapping actually succeeded.  Try cat /proc/partitions
or ls /dev/rbd.
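
A quick way to check that, assuming the default /dev/rbd* naming used by the
kernel client (rbd showmapped, if available in this version, also lists kernel
mappings):

$ /sbin/udevadm settle
$ grep rbd /proc/partitions
$ ls -l /dev/rbd*
$ rbd showmapped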

sage

 
 Am I missing something?
 I think this set of commands worked perfectly with cuttlefish.
 
 TIA
 
 Bernhard
 
 --
 
 
 Bernhard Glomm
 IT Administration
 
 Phone:
 +49 (30) 86880 134
 Fax:
 +49 (30) 86880 100
 Skype:
 bernhard.glomm.ecologic
 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin |
 Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
 DE811963464
 Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
 
 
 


Re: [ceph-users] rbd mapping failes - maybe solved

2013-08-30 Thread bernhard glomm
Thanks Sage,

I just tried various versions from gitbuilder and finally found one that worked 
;-)

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/   
raring main

Looks like it works perfectly, at first glance with much better performance
than cuttlefish.

Do you need some tests for my problem with 0.67.2-16-gd41cf86?
I could do so on Monday.

I didn't run udevadm nor cat /proc/partitions, but I checked
/dev/rbd* - not present -
and
tree /dev/disk
also showed no hint of a new device other than my hard disk partitions.

Since the dumpling version now seems to work, I would otherwise keep using that
to get more familiar with Ceph.

Bernhard

 Bernhard Glomm
IT Administration

Phone:   +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype:   bernhard.glomm.ecologic
   
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | 
Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: 
DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

On Aug 30, 2013, at 5:05 PM, Sage Weil s...@inktank.com wrote:

 Hi Bernhard,
 
 On Fri, 30 Aug 2013, Bernhard Glomm wrote:
 Hi all,
 
 due to a problem with ceph-deploy I currently use
 
 deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/
 raring main
 (ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))
 
 Now the initialization of the cluster works like a charm,
 ceph health is okay,
 
 Great; this will get backported to dumpling shortly and will be included 
 in the 0.67.3 release.
 
 just the mapping of the created rbd is failing.
 
 -
 root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool
 --yes-i-really-really-mean-it
 pool 'kvm-pool' deleted
 root@ping[/1]:~ # ceph osd lspools
 
 0 data,1 metadata,2 rbd,
 root@ping[/1]:~ #
 root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
 pool 'kvm-pool' created
 root@ping[/1]:~ # ceph osd lspools
 0 data,1 metadata,2 rbd,4 kvm-pool,
 root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
 set pool 4 min_size to 2
 root@ping[/1]:~ # ceph osd dump | grep 'rep size'
 pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
 pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
 pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins
 pg_num 64 pgp_num 64 last_change 1 owner 0
 pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
 pg_num 64 pgp_num 64 last_change 1 owner 0
 pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins
 pg_num 1000 pgp_num 1000 last_change 33 owner 0
 root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
 root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
 root@ping[/1]:~ # rbd ls kvm-pool
 atom03.cimg
 atom04.cimg
 root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
 rbd image 'atom03.cimg':
 size 4000 MB in 1000 objects
 order 22 (4096 KB objects)
 block_name_prefix: rb.0.114d.2ae8944a
 format: 1
 root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
 rbd image 'atom04.cimg':
 size 4000 MB in 1000 objects
 order 22 (4096 KB objects)
 block_name_prefix: rb.0.127d.74b0dc51
 format: 1
 root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
 rbd: '/sbin/udevadm settle' failed! (256)
 root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin
 --keyring /etc/ceph/ceph.client.admin.keyring
 ^Crbd: '/sbin/udevadm settle' failed! (2)
 root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring
 /etc/ceph/ceph.client.admin.keyring
 rbd: '/sbin/udevadm settle' failed! (256)
 -
 
 What happens if you run '/sbin/udevadm settle' from the command line?
 
 Also, this is the very last step before rbd exits (normally with success), so
 my guess is that the rbd mapping actually succeeded.  Try cat /proc/partitions
 or ls /dev/rbd.
 
 sage
 
 
 Am I missing something?
 I think this set of commands worked perfectly with cuttlefish.
 
 TIA
 
 Bernhard
 
 --
 
 
 Bernhard Glomm
 IT Administration
 
 Phone:
 +49 (30) 86880 134
 Fax:
 +49 (30) 86880 100
 Skype:
 bernhard.glomm.ecologic
 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin |
 Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.:
 DE811963464
 Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
 
 
 




Re: [ceph-users] RBD Mapping

2013-07-23 Thread Wido den Hollander

On 07/23/2013 09:09 PM, Gaylord Holder wrote:

Is it possible to find out which machines are mapping an RBD?


No, that is stateless. You can use locking, however; for example, you can
put the hostname of the machine in the lock.


But that's not mandatory in the protocol.
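
A sketch of that locking convention (mypool/myimage is a hypothetical image;
the lock id is a free-form string, so the hostname works as a tag):

$ rbd lock add mypool/myimage "$(hostname)"
$ rbd lock list mypool/myimage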

Maybe you are able to list the watchers for an RBD device, but I'm not sure
about that.




-Gaylord



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: [ceph-users] RBD Mapping

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 1:28 PM, Wido den Hollander w...@42on.com wrote:
 On 07/23/2013 09:09 PM, Gaylord Holder wrote:

 Is it possible to find out which machines are mapping an RBD?


 No, that is stateless. You can use locking, however; for example, you can put
 the hostname of the machine in the lock.

 But that's not mandatory in the protocol.

 Maybe you are able to list the watchers for an RBD device, but I'm not sure about
 that.

You can. "rados listwatchers object" will tell you who's got watches
registered, and that output should include IPs. You'll want to run it
against the rbd head object.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
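
A sketch of that lookup (pool and image names here are hypothetical; for a
format 1 image the head object is <image>.rbd, while for a format 2 image it is
rbd_header.<id>, where <id> is the part after rbd_data. in the block_name_prefix
reported by rbd info):

$ rbd info mypool/myimage
$ rados -p mypool listwatchers rbd_header.789c2ae8944a   # format 2 image
$ rados -p mypool listwatchers myimage.rbd               # format 1 image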





 -Gaylord



 --
 Wido den Hollander
 42on B.V.

 Phone: +31 (0)20 700 9902
 Skype: contact42on



Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
Hi Greg,

Just tried the list watchers on an RBD with the QEMU driver and I got:

root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a
watcher=client.30882 cookie=1

I also tried with the kernel module but didn't see anything…
No IP addresses anywhere… :/, any idea?

Nice tip btw :)

Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com – Skype: han.sbastien
Address: 10, rue de la Victoire – 75009 Paris
Web: www.enovance.com – Twitter: @enovance

On Jul 23, 2013, at 11:01 PM, Gregory Farnum g...@inktank.com wrote:

> You can. "rados listwatchers object" will tell you who's got watches
> registered, and that output should include IPs. You'll want to run it
> against the rbd head object.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com


Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
Arf, no worries. Even after a quick dive into the logs, I haven't found
anything (default log level).

Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com – Skype: han.sbastien
Address: 10, rue de la Victoire – 75009 Paris
Web: www.enovance.com – Twitter: @enovance

On Jul 24, 2013, at 12:08 AM, Gregory Farnum g...@inktank.com wrote:

> On Tue, Jul 23, 2013 at 2:55 PM, Sebastien Han sebastien@enovance.com wrote:
>> Hi Greg,
>> Just tried the list watchers on an RBD with the QEMU driver and I got:
>> root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a
>> watcher=client.30882 cookie=1
>> I also tried with the kernel module but didn't see anything…
>> No IP addresses anywhere… :/, any idea?
>> Nice tip btw :)
>
> Oh, whoops. Looks like the first iteration didn't include IP
> addresses; they show up in version 0.65 or later. Sorry for the
> inconvenience. I think there might be a way to convert client IDs into
> addresses but I can't quite think of any convenient ones (as opposed
> to inconvenient ones like digging them up out of logs); maybe somebody
> else has an idea...
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com