Re: [ceph-users] Ceph Block Device

2015-02-17 Thread Brad Hubbard

On 02/18/2015 11:48 AM, Garg, Pankaj wrote:

libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open 
moddep file


Try 'sudo depmod' and then running your modprobe again.

This seems more like an OS issue than a Ceph specific issue.
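
Something along these lines usually sorts it out, assuming the module tree
for the running kernel is actually installed under /lib/modules:

  uname -r                      # confirm the running kernel version
  ls /lib/modules/$(uname -r)   # this directory should exist and contain the modules
  sudo depmod -a                # regenerate modules.dep and modules.dep.bin
  sudo modprobe rbd             # then retry loading the rbd module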

Cheers,
Brad
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Block Device

2015-02-17 Thread Garg, Pankaj
Hi,
I have a Ceph cluster and I am trying to create a block device. I execute the 
following command, and get errors:


sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring
libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open 
moddep file '/lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin'
modinfo: ERROR: Module alias rbd not found.
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open 
moddep file '/lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin'
rbd: modprobe rbd failed! (256)


I need help figuring out what is wrong. I installed the Ceph package on the
machine where I execute the command. This is on ARM, BTW. Is there something I am missing?
I am able to run object storage and rados bench just fine on the cluster.


Thanks
Pankaj
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block Device

2015-02-17 Thread Garg, Pankaj
Hi Brad,

This is Ubuntu 14.04, running on ARM.
/lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin doesn't exist. 
The 'rmmod rbd' command says: rmmod: ERROR: Module rbd is not currently loaded.

Running as root doesn't make any difference. I was running with sudo anyway.

Thanks
Pankaj

-Original Message-
From: Brad Hubbard [mailto:bhubb...@redhat.com] 
Sent: Tuesday, February 17, 2015 5:06 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Block Device

On 02/18/2015 09:56 AM, Garg, Pankaj wrote:
 Hi,

 I have a Ceph cluster and I am trying to create a block device. I execute the 
 following command, and get errors:

 sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring

 libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open 
 moddep file '/lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin'

 modinfo: ERROR: Module alias rbd not found.

 modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open 
 moddep file '/lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin'

 rbd: modprobe rbd failed! (256)

What distro/release is this?

Does /lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin exist?

Can you run the command as root?


 Need help with what is wrong. I installed the Ceph package on the machine 
 where I execute the command. This is on ARM BTW.  Is there something I am 
 missing?

 I am able to run Object storage and rados bench just fine on the cluster.

 Thanks

 Pankaj



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 


Kindest Regards,

Brad Hubbard
Senior Software Maintenance Engineer
Red Hat Global Support Services
Asia Pacific Region
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block Device

2015-02-17 Thread Brad Hubbard

On 02/18/2015 09:56 AM, Garg, Pankaj wrote:

Hi,

I have a Ceph cluster and I am trying to create a block device. I execute the 
following command, and get errors:

sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring

libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open 
moddep file '/lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin'

modinfo: ERROR: Module alias rbd not found.

modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open 
moddep file '/lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin'

rbd: modprobe rbd failed! (256)


What distro/release is this?

Does /lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin exist?

Can you run the command as root?
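
If that file does not exist, it is also worth checking whether rbd was built
for that kernel at all. On a custom-built kernel, something like the following
should show CONFIG_BLK_DEV_RBD=m (or =y), assuming the config file was
installed alongside the kernel:

  grep BLK_DEV_RBD /boot/config-$(uname -r)
  # or, if the kernel exposes its configuration:
  zcat /proc/config.gz | grep BLK_DEV_RBD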



Need help with what is wrong. I installed the Ceph package on the machine where 
I execute the command. This is on ARM BTW.  Is there something I am missing?

I am able to run Object storage and rados bench just fine on the cluster.

Thanks

Pankaj



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--


Kindest Regards,

Brad Hubbard
Senior Software Maintenance Engineer
Red Hat Global Support Services
Asia Pacific Region
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Travis Rhoden
One question re: discard support for kRBD -- does it matter which format
the RBD image is? Are Format 1 and Format 2 both okay, or just Format 2?

 - Travis

On Mon, Dec 15, 2014 at 8:58 AM, Max Power 
mailli...@ferienwohnung-altenbeken.de wrote:

  Ilya Dryomov ilya.dryo...@inktank.com wrote on 12 December 2014 at 18:00:
  Just a note, discard support went into 3.18, which was released a few
  days ago.

 I recently compiled 3.18 on Debian 7 and, what can I say... it works
 perfectly well. The used space goes up and down again. So I think this will
 be my choice. Thank you!
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Josh Durgin

On 12/18/2014 10:49 AM, Travis Rhoden wrote:

One question re: discard support for kRBD -- does it matter which format
the RBD is?  Format 1 and Format 2 are okay, or just for Format 2?


It shouldn't matter which format you use.
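
For what it's worth, 'rbd info' shows the format of an existing image if you
want to double check (the image name below is just an example):

  rbd info cephblockimage | grep format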

Josh
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Adeel Nazir
Discard is supported in kernel 3.18 rc1 or greater as per 
https://lkml.org/lkml/2014/10/14/450
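
A quick way to confirm on a given client (device name and mount point below
are placeholders):

  uname -r                                        # needs to be 3.18+ for krbd discard
  cat /sys/block/rbd0/queue/discard_granularity   # non-zero once discard is supported
  sudo fstrim -v /mnt/rbd                         # should no longer report 'not supported'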


 -Original Message-
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
 Robert Sander
 Sent: Friday, December 12, 2014 7:01 AM
 To: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Ceph Block device and Trim/Discard
 
 On 12.12.2014 12:48, Max Power wrote:
 
  It would be great to shrink the used space. Is there a way to achieve
  this? Or have I done something wrong? In a professional environment
  you may can live with filesystems that only grow. But on my small
  home-cluster this really is a problem.
 
 As Wido already mentioned the kernel RBD does not support discard.
 
 When using qemu+rbd you cannot use the virtio driver as this also does not
 support discard. My best experience is with the virtual SATA driver and the
 options cache=writeback and discard=on.
 
 Regards
 --
 Robert Sander
 Heinlein Support GmbH
 Schwedter Str. 8/9b, 10119 Berlin
 
 http://www.heinlein-support.de
 
 Tel: 030 / 405051-43
 Fax: 030 / 405051-19
 
  Mandatory information per §35a GmbHG:
  HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
  Managing director: Peer Heinlein -- Registered office: Berlin

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-15 Thread Max Power
 Ilya Dryomov ilya.dryo...@inktank.com wrote on 12 December 2014 at 18:00:
 Just a note, discard support went into 3.18, which was released a few
 days ago.

I recently compiled 3.18 on Debian 7 and, what can I say... it works
perfectly well. The used space goes up and down again. So I think this will be
my choice. Thank you!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Max Power
I am new to Ceph and am starting to discover its features. I used ext4 partitions
(also mounted with -o discard) to place several OSDs on them. Then I created an
erasure-coded pool in this cluster. On top of this there is a rados block
device which also holds an ext4 filesystem (of course mounted with -o discard).

I started to create some random 1MB files with tempfile and /dev/urandom to mess
the filesystem up and use all available space (256MB on my testdrive). After
this I deleted everything again. To my surprise the discard-feature did not
work. Ceph reports that ~256MB are used for data (after mkfs it was around 0MB).
I also tried to use 'fstrim' on the mountpoint but it reports that the discard
operation is not supported. But why?
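
For reference, the test sequence looked roughly like this (device and mount
point names are placeholders for my setup):

  sudo mount -o discard /dev/rbd0 /mnt/test
  for i in $(seq 1 200); do dd if=/dev/urandom of=/mnt/test/file$i bs=1M count=1; done
  rm -f /mnt/test/file*
  sudo fstrim -v /mnt/test   # this is where I get 'the discard operation is not supported'
  ceph df                    # the used space stays at ~256MB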

It would be great to shrink the used space. Is there a way to achieve this? Or
have I done something wrong? In a professional environment you may be able to
live with filesystems that only grow. But on my small home cluster this really
is a problem.

Greetings from Germany!

P.S.: I use ceph version 0.80.7 as delivered with Debian jessie.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Wido den Hollander
On 12/12/2014 12:48 PM, Max Power wrote:
 I am new to Ceph and start discovering its features. I used ext4 partitions
 (also mounted with -o discard) to place several osd on them. Then I created an
 erasure coded pool in this cluster. On top of this there is the rados block
 device which holds also an ext4 filesystem (of course mounted with -o 
 discard).
 

How are you using Ceph, Kernel RBD, Qemu/KVM?

 I started to create some random 1MB files with tempfile and /dev/urandom to 
 mess
 the filesystem up and use all available space (256MB on my testdrive). After
 this I deleted everything again. To my surprise the discard-feature did not
 work. Ceph reports that ~256MB are used for data (after mkfs it was around 
 0MB).
 I also tried to use 'fstrim' on the mountpoint but it reports that the 
 discard
 operation is not supported. But why?
 

It depends. Kernel RBD does not support discard/trim yet. Qemu does
under certain situations and with special configuration.

A search should tell you the parameters.

 It would be great to shrink the used space. Is there a way to achieve this? Or
 have I done something wrong? In a professional environment you may can live 
 with
 filesystems that only grow. But on my small home-cluster this really is a
 problem.
 
 Greetings from Germany!
 
 P.S.: I use ceph version 0.80.7 as delivered with debian jessie.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 


-- 
Wido den Hollander
Ceph consultant and trainer
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Max Power
 Wido den Hollander w...@42on.com wrote on 12 December 2014 at 12:53:
 It depends. Kernel RBD does not support discard/trim yet. Qemu does
 under certain situations and with special configuration.

Ah, thank you. So this is my problem. I use rbd with the kernel module. I think
I should port my fileserver to a qemu/kvm environment then, and hope that it is
safe to have a big qemu partition of around 10 TB.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Sebastien Han
Discard works with virtio-scsi controllers for disks in QEMU.
Just use discard=unmap in the disk section (scsi disk).
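
On a plain QEMU command line the relevant bits come out roughly like this
(pool/image name and config path are placeholders):

  qemu-system-x86_64 ... \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=rbd:rbd/myimage:conf=/etc/ceph/ceph.conf,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
    -device scsi-hd,drive=drive0,bus=scsi0.0

If you go through libvirt, the equivalent is (as far as I know) the
discard='unmap' attribute on the disk's <driver> element.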


 On 12 Dec 2014, at 13:17, Max Power mailli...@ferienwohnung-altenbeken.de 
 wrote:
 
  Wido den Hollander w...@42on.com wrote on 12 December 2014 at 12:53:
 It depends. Kernel RBD does not support discard/trim yet. Qemu does
 under certain situations and with special configuration.
 
 Ah, Thank you. So this is my problem. I use rbd with the kernel modules. I 
 think
 I should port my fileserver to qemu/kvm environment then and hope that it is
 safe to have a big qemu-partition with around 10 TB.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Ilya Dryomov
On Fri, Dec 12, 2014 at 2:53 PM, Wido den Hollander w...@42on.com wrote:
 On 12/12/2014 12:48 PM, Max Power wrote:
 I am new to Ceph and start discovering its features. I used ext4 partitions
 (also mounted with -o discard) to place several osd on them. Then I created 
 an
 erasure coded pool in this cluster. On top of this there is the rados block
 device which holds also an ext4 filesystem (of course mounted with -o 
 discard).


 How are you using Ceph, Kernel RBD, Qemu/KVM?

 I started to create some random 1MB files with tempfile and /dev/urandom to 
 mess
 the filesystem up and use all available space (256MB on my testdrive). After
 this I deleted everything again. To my surprise the discard-feature did not
 work. Ceph reports that ~256MB are used for data (after mkfs it was around 
 0MB).
 I also tried to use 'fstrim' on the mountpoint but it reports that the 
 discard
 operation is not supported. But why?


 It depends. Kernel RBD does not support discard/trim yet. Qemu does

Just a note, discard support went into 3.18, which was released a few
days ago.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Robert Sander
On 12.12.2014 12:48, Max Power wrote:

 It would be great to shrink the used space. Is there a way to achieve this? Or
 have I done something wrong? In a professional environment you may can live 
 with
 filesystems that only grow. But on my small home-cluster this really is a
 problem.

As Wido already mentioned, the kernel RBD does not support discard.

When using qemu+rbd you cannot use the virtio driver, as this also does
not support discard. My best experience is with the virtual SATA driver
and the options cache=writeback and discard=on.
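
With the emulated SATA route the relevant QEMU options look roughly like this
(image name and config path are placeholders; discard=on and discard=unmap
mean the same thing):

  -device ich9-ahci,id=ahci0 \
  -drive file=rbd:rbd/myimage:conf=/etc/ceph/ceph.conf,format=raw,if=none,id=drive0,cache=writeback,discard=on \
  -device ide-hd,drive=drive0,bus=ahci0.0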

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing director: Peer Heinlein -- Registered office: Berlin



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Wido den Hollander
On 12/12/2014 01:17 PM, Max Power wrote:
 Wido den Hollander w...@42on.com wrote on 12 December 2014 at 12:53:
 It depends. Kernel RBD does not support discard/trim yet. Qemu does
 under certain situations and with special configuration.
 
 Ah, Thank you. So this is my problem. I use rbd with the kernel modules. I 
 think
 I should port my fileserver to qemu/kvm environment then and hope that it is
 safe to have a big qemu-partition with around 10 TB.

Regarding discard in Kernel RBD: http://tracker.ceph.com/issues/190

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 


-- 
Wido den Hollander
Ceph consultant and trainer
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph block device IO seems slow? Did i got something wrong?

2014-03-16 Thread duan . xufeng
Hi,
I attached one 500G block device to the VM and tested it in the VM using
'dd if=/dev/zero of=myfile bs=1M count=1024'.
I got an average IO speed of about 31 MB/s. I thought I should have gotten
around 100 MB/s, since my VM hypervisor has a 1G NIC and the OSD host has a 10G NIC.
Did I get a wrong result? How can I make it faster?
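
In case it is useful, the dd run inside the VM and the cluster-side baseline
I plan to compare it against look roughly like this (the pool name is just
the default one):

  # inside the VM, bypassing the page cache
  dd if=/dev/zero of=myfile bs=1M count=1024 oflag=direct
  # on an OSD/monitor host, raw cluster write throughput for 30 seconds
  rados bench -p rbd 30 write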

Yours sincerely,

Michael.




[root@storage1 ~]# ceph -w
2014-03-16 17:24:44.596903 mon.0 [INF] pgmap v2245: 1127 pgs: 1127 
active+clean; 8758 MB data, 100 GB used, 27749 GB / 29340 GB avail; 5059 
kB/s wr, 1 op/s
2014-03-16 17:24:45.742589 mon.0 [INF] pgmap v2246: 1127 pgs: 1127 
active+clean; 8826 MB data, 100 GB used, 27749 GB / 29340 GB avail; 21390 
kB/s wr, 7 op/s
2014-03-16 17:24:46.864936 mon.0 [INF] pgmap v2247: 1127 pgs: 1127 
active+clean; 8838 MB data, 100 GB used, 27749 GB / 29340 GB avail; 36789 
kB/s wr, 13 op/s
2014-03-16 17:24:49.578711 mon.0 [INF] pgmap v2248: 1127 pgs: 1127 
active+clean; 8869 MB data, 100 GB used, 27749 GB / 29340 GB avail; 11404 
kB/s wr, 3 op/s
2014-03-16 17:24:50.824619 mon.0 [INF] pgmap v2249: 1127 pgs: 1127 
active+clean; 8928 MB data, 100 GB used, 27749 GB / 29340 GB avail; 22972 
kB/s wr, 7 op/s
2014-03-16 17:24:51.980126 mon.0 [INF] pgmap v2250: 1127 pgs: 1127 
active+clean; 8933 MB data, 100 GB used, 27749 GB / 29340 GB avail; 28408 
kB/s wr, 10 op/s
2014-03-16 17:24:54.603830 mon.0 [INF] pgmap v2251: 1127 pgs: 1127 
active+clean; 8954 MB data, 100 GB used, 27749 GB / 29340 GB avail; 7090 
kB/s wr, 2 op/s
2014-03-16 17:24:55.671644 mon.0 [INF] pgmap v2252: 1127 pgs: 1127 
active+clean; 9034 MB data, 100 GB used, 27749 GB / 29340 GB avail; 27465 
kB/s wr, 9 op/s
2014-03-16 17:24:57.057567 mon.0 [INF] pgmap v2253: 1127 pgs: 1127 
active+clean; 9041 MB data, 100 GB used, 27749 GB / 29340 GB avail; 39638 
kB/s wr, 13 op/s
2014-03-16 17:24:59.603449 mon.0 [INF] pgmap v2254: 1127 pgs: 1127 
active+clean; 9057 MB data, 100 GB used, 27749 GB / 29340 GB avail; 6019 
kB/s wr, 2 op/s
2014-03-16 17:25:00.671065 mon.0 [INF] pgmap v2255: 1127 pgs: 1127 
active+clean; 9138 MB data, 100 GB used, 27749 GB / 29340 GB avail; 25646 
kB/s wr, 9 op/s
2014-03-16 17:25:01.860269 mon.0 [INF] pgmap v2256: 1127 pgs: 1127 
active+clean; 9146 MB data, 100 GB used, 27749 GB / 29340 GB avail; 40427 
kB/s wr, 14 op/s
2014-03-16 17:25:04.561468 mon.0 [INF] pgmap v2257: 1127 pgs: 1127 
active+clean; 9162 MB data, 100 GB used, 27749 GB / 29340 GB avail; 6298 
kB/s wr, 2 op/s
2014-03-16 17:25:05.662565 mon.0 [INF] pgmap v2258: 1127 pgs: 1127 
active+clean; 9274 MB data, 101 GB used, 27748 GB / 29340 GB avail; 34520 
kB/s wr, 12 op/s
2014-03-16 17:25:06.851644 mon.0 [INF] pgmap v2259: 1127 pgs: 1127 
active+clean; 9286 MB data, 101 GB used, 27748 GB / 29340 GB avail; 56598 
kB/s wr, 19 op/s
2014-03-16 17:25:09.597428 mon.0 [INF] pgmap v2260: 1127 pgs: 1127 
active+clean; 9322 MB data, 101 GB used, 27748 GB / 29340 GB avail; 12426 
kB/s wr, 5 op/s
2014-03-16 17:25:10.765610 mon.0 [INF] pgmap v2261: 1127 pgs: 1127 
active+clean; 9392 MB data, 101 GB used, 27748 GB / 29340 GB avail; 27569 
kB/s wr, 13 op/s
2014-03-16 17:25:11.943055 mon.0 [INF] pgmap v2262: 1127 pgs: 1127 
active+clean; 9392 MB data, 101 GB used, 27748 GB / 29340 GB avail; 31581 
kB/s wr, 16 op/s


[root@storage1 ~]# ceph -s
cluster 3429fd17-4a92-4d3b-a7fa-04adedb0da82
 health HEALTH_OK
 monmap e1: 1 mons at {storage1=193.168.1.100:6789/0}, election epoch 
1, quorum 0 storage1
 osdmap e245: 16 osds: 16 up, 16 in
  pgmap v2273: 1127 pgs, 4 pools, 9393 MB data, 3607 objects
101 GB used, 27748 GB / 29340 GB avail
1127 active+clean

[root@storage1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1  16  root default
-2  16  host storage1
0   1   osd.0   up  1
1   1   osd.1   up  1
2   1   osd.2   up  1
3   1   osd.3   up  1
4   1   osd.4   up  1
5   1   osd.5   up  1
6   1   osd.6   up  1
7   1   osd.7   up  1
8   1   osd.8   up  1
9   1   osd.9   up  1
10  1   osd.10  up  1
11  1   osd.11  up  1
12  1   osd.12  up  1
13  1   osd.13  up  1
14  1   osd.14  up  1
15  1   osd.15  up  1


Re: [ceph-users] Ceph Block Device install

2013-10-22 Thread Fuchs, Andreas (SwissTXT)
Hi Alfredo
Thanks for picking up on this

 -Original Message-
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Montag, 21. Oktober 2013 14:17
 To: Fuchs, Andreas (SwissTXT)
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Ceph Block Device install
 
 On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
 andreas.fu...@swisstxt.ch wrote:
  Hi
 
  I try to install a client with ceph block device following the instructions 
  here:
  http://ceph.com/docs/master/start/quick-rbd/
 
  the client has a user ceph and ssh is setup passwordless also for sudo
  when I run ceph-deploy I see:
 
  On the ceph management host:
 
  ceph-deploy install 10.100.21.10
  [ceph_deploy.install][DEBUG ] Installing stable version dumpling on
  cluster ceph hosts 10.100.21.10 [ceph_deploy.install][DEBUG ] Detecting
 platform for host 10.100.21.10 ...
  [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with
  sudo [ceph_deploy][ERROR ] ClientInitException:
 
 
 Mmmn this doesn't look like the full log... it looks like its missing the 
 rest of
 the error? Unless that is where it stopped which would be terrible.
 
This is where it stopped; there is nothing more.

 What version of ceph-deploy are you using?

ceph-deploy --version
1.2.7
 
But I saw that the target where I wanted to mount the ceph block device is
CentOS 5, so this might never work anyway.
I think we'll do an NFS re-export of an RBD, but it would be nice if ceph-deploy 
would handle this a little more gracefully :-)

Andi

  On the target client in secure log:
 
  Oct 21 11:18:52 archiveadmin sshd[22320]: Accepted publickey for ceph
  from 10.100.220.110 port 47197 ssh2 Oct 21 11:18:52 archiveadmin
 sshd[22320]: pam_unix(sshd:session): session opened for user ceph by
 (uid=0)
  Oct 21 11:18:52 archiveadmin sudo: ceph : TTY=unknown ;
 PWD=/home/ceph ; USER=root ; COMMAND=/usr/bin/python -u -c exec
 reduce(lambda a,b: a+b, map(chr,
  Oct 21 11:18:52 archiveadmin sudo: ceph : (command continued) [long stream of ASCII character codes -- the pushy remote-execution payload -- omitted]

Re: [ceph-users] Ceph Block Device install

2013-10-22 Thread Alfredo Deza
On Tue, Oct 22, 2013 at 3:39 AM, Fuchs, Andreas (SwissTXT)
andreas.fu...@swisstxt.ch wrote:
 Hi Alfredo
 Thanks for picking up on this

 -Original Message-
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Montag, 21. Oktober 2013 14:17
 To: Fuchs, Andreas (SwissTXT)
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Ceph Block Device install

 On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
 andreas.fu...@swisstxt.ch wrote:
  Hi
 
  I try to install a client with ceph block device following the 
  instructions here:
  http://ceph.com/docs/master/start/quick-rbd/
 
  the client has a user ceph and ssh is setup passwordless also for sudo
  when I run ceph-deploy I see:
 
  On the ceph management host:
 
  ceph-deploy install 10.100.21.10
  [ceph_deploy.install][DEBUG ] Installing stable version dumpling on
  cluster ceph hosts 10.100.21.10 [ceph_deploy.install][DEBUG ] Detecting
 platform for host 10.100.21.10 ...
  [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with
  sudo [ceph_deploy][ERROR ] ClientInitException:
 

 Mmmn this doesn't look like the full log... it looks like its missing the 
 rest of
 the error? Unless that is where it stopped which would be terrible.

 This is where it stoped, there is nothing more

 What version of ceph-deploy are you using?

 ceph-deploy --version
 1.2.7

 But i saw, that the target where I wanted to mount the ceph block device is a 
 CentOS 5, so this might never work anyway.

But we do support CentOS 5. We actually test extensively for that OS.

Were you able to create the ceph config and deploy the monitors
correctly and basically complete the 'Quick Cluster' guide before
attempting this?

It does look a bit odd that you are specifying an IP here; any reason
not to specify the hosts you defined when you did `ceph-deploy new`?

On ceph-deploy 1.2.7 all commands start with an INFO statement
displaying the version number and where the executable came from, and I
don't see that in your output. Also, what user are you using to
execute ceph-deploy?

Were you able to see any errors before getting to this point going
through the 'Quick Cluster' guide?

 I think well do a NFS reexport of a rbd, but it would be nice if ceph-deploy 
 would handle this a little nicer :-)

 Andi

  On the target client in secure log:
 
  Oct 21 11:18:52 archiveadmin sshd[22320]: Accepted publickey for ceph
  from 10.100.220.110 port 47197 ssh2 Oct 21 11:18:52 archiveadmin
 sshd[22320]: pam_unix(sshd:session): session opened for user ceph by
 (uid=0)
  Oct 21 11:18:52 archiveadmin sudo: ceph : TTY=unknown ;
 PWD=/home/ceph ; USER=root ; COMMAND=/usr/bin/python -u -c exec
 reduce(lambda a,b: a+b, map(chr,
  Oct 21 11:18:52 archiveadmin sudo: ceph : (command continued) [long stream of ASCII character codes -- the pushy remote-execution payload -- omitted]

Re: [ceph-users] Ceph Block Device install

2013-10-22 Thread Fuchs, Andreas (SwissTXT)


 -Original Message-
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Dienstag, 22. Oktober 2013 14:16
 To: Fuchs, Andreas (SwissTXT)
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] Ceph Block Device install
 
 On Tue, Oct 22, 2013 at 3:39 AM, Fuchs, Andreas (SwissTXT)
 andreas.fu...@swisstxt.ch wrote:
  Hi Alfredo
  Thanks for picking up on this
 
  -Original Message-
  From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
  Sent: Montag, 21. Oktober 2013 14:17
  To: Fuchs, Andreas (SwissTXT)
  Cc: ceph-users@lists.ceph.com
  Subject: Re: [ceph-users] Ceph Block Device install
 
  On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
  andreas.fu...@swisstxt.ch wrote:
   Hi
  
   I try to install a client with ceph block device following the 
   instructions
 here:
   http://ceph.com/docs/master/start/quick-rbd/
  
   the client has a user ceph and ssh is setup passwordless also for
   sudo when I run ceph-deploy I see:
  
   On the ceph management host:
  
   ceph-deploy install 10.100.21.10
   [ceph_deploy.install][DEBUG ] Installing stable version dumpling on
   cluster ceph hosts 10.100.21.10 [ceph_deploy.install][DEBUG ]
   Detecting
  platform for host 10.100.21.10 ...
   [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with
   sudo [ceph_deploy][ERROR ] ClientInitException:
  
 
  Mmmn this doesn't look like the full log... it looks like its missing
  the rest of the error? Unless that is where it stopped which would be
 terrible.
 
  This is where it stoped, there is nothing more
 
  What version of ceph-deploy are you using?
 
  ceph-deploy --version
  1.2.7
 
  But i saw, that the target where I wanted to mount the ceph block device is
 a CentOS 5, so this might never work anyway.
 
 But we do support CentOS 5. We actually test extensively for that OS.
 
 Where you able to create the ceph config and deploy the monitors correctly
 and basically complete the 'Quick Cluster' guide before attempting this?
 
 It does look a bit odd that you are specifying an IP  here, any reason not to
 specify the hosts you defined when you did `ceph-deploy new` ?
 
 On ceph-deploy 1.2.7 all commands start with an INFO statement displaying
 the version number and where did the executable come from and I don't see
 that from your output. Also, what user are you using to execute ceph-
 deploy?
 
 Where you able to see any errors before getting to this point going through
 the Quick Cluster guide?
 
We are running a ceph cluster and radosgw successfully. That cluster was
installed with ceph-deploy, but it's not really related.

The host I wanted to install rbd on is a client and should NOT be part of the
cluster. I thought in this scenario ceph-deploy is only used to add the repo,
install the packages and distribute the keys, but maybe I misunderstand?

For CentOS 5 I don't see any RPMs in the repos, so that's why I assume it's not
supported; our ceph cluster runs on CentOS 6 and that's working fine.
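
What I had in mind for a plain client (on a supported distro such as CentOS 6;
hostnames and the image name below are just placeholders) was roughly:

  # on the client: install only the client-side packages
  sudo yum install ceph-common        # or: sudo apt-get install ceph-common
  # copy the cluster config and an appropriate keyring from the admin host
  scp admin-host:/etc/ceph/ceph.conf admin-host:/etc/ceph/ceph.client.admin.keyring .
  sudo mv ceph.conf ceph.client.admin.keyring /etc/ceph/
  # then the rbd CLI should work, e.g.
  sudo rbd ls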



  I think well do a NFS reexport of a rbd, but it would be nice if
  ceph-deploy would handle this a little nicer :-)
 
  Andi
 
   On the target client in secure log:
  
   Oct 21 11:18:52 archiveadmin sshd[22320]: Accepted publickey for
   ceph from 10.100.220.110 port 47197 ssh2 Oct 21 11:18:52
   archiveadmin
  sshd[22320]: pam_unix(sshd:session): session opened for user ceph by
  (uid=0)
   Oct 21 11:18:52 archiveadmin sudo: ceph : TTY=unknown ;
  PWD=/home/ceph ; USER=root ; COMMAND=/usr/bin/python -u -c exec
  reduce(lambda a,b: a+b, map(chr,
   Oct 21 11:18:52 archiveadmin sudo: ceph : (command continued) [long stream of ASCII character codes -- the pushy remote-execution payload -- omitted]

[ceph-users] Ceph Block Device install

2013-10-21 Thread Fuchs, Andreas (SwissTXT)
Hi

I am trying to install a client with the Ceph block device, following the
instructions here:
http://ceph.com/docs/master/start/quick-rbd/

The client has a user 'ceph', and ssh and sudo are both set up passwordless.
When I run ceph-deploy I see:

On the ceph management host:

ceph-deploy install 10.100.21.10
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster 
ceph hosts 10.100.21.10
[ceph_deploy.install][DEBUG ] Detecting platform for host 10.100.21.10 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy][ERROR ] ClientInitException:

On the target client in secure log:

Oct 21 11:18:52 archiveadmin sshd[22320]: Accepted publickey for ceph from 
10.100.220.110 port 47197 ssh2
Oct 21 11:18:52 archiveadmin sshd[22320]: pam_unix(sshd:session): session 
opened for user ceph by (uid=0)
Oct 21 11:18:52 archiveadmin sudo: ceph : TTY=unknown ; PWD=/home/ceph ; 
USER=root ; COMMAND=/usr/bin/python -u -c exec reduce(lambda a,b: a+b, map(chr,
Oct 21 11:18:52 archiveadmin sudo: ceph : (command continued) [long stream of ASCII character codes -- the pushy remote-execution payload -- omitted]
Oct 21 11:18:52 archiveadmin sshd[22322]: Received disconnect from 
10.100.220.110: 11: disconnected by user
Oct 21 11:18:52 archiveadmin sshd[22320]: pam_unix(sshd:session): session 
closed for user ceph
Oct 21 11:22:24 archiveadmin 

Re: [ceph-users] Ceph Block Device install

2013-10-21 Thread Alfredo Deza
On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
andreas.fu...@swisstxt.ch wrote:
 Hi

 I try to install a client with ceph block device following the instructions 
 here:
 http://ceph.com/docs/master/start/quick-rbd/

 the client has a user ceph and ssh is setup passwordless also for sudo
 when I run ceph-deploy I see:

 On the ceph management host:

 ceph-deploy install 10.100.21.10
 [ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster 
 ceph hosts 10.100.21.10
 [ceph_deploy.install][DEBUG ] Detecting platform for host 10.100.21.10 ...
 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
 [ceph_deploy][ERROR ] ClientInitException:


Mmmn, this doesn't look like the full log... it looks like it's missing
the rest of the error. Unless that is where it stopped, which would be
terrible.

What version of ceph-deploy are you using?

 On the target client in secure log:

 Oct 21 11:18:52 archiveadmin sshd[22320]: Accepted publickey for ceph from 
 10.100.220.110 port 47197 ssh2
 Oct 21 11:18:52 archiveadmin sshd[22320]: pam_unix(sshd:session): session 
 opened for user ceph by (uid=0)
 Oct 21 11:18:52 archiveadmin sudo: ceph : TTY=unknown ; PWD=/home/ceph ; 
 USER=root ; COMMAND=/usr/bin/python -u -c exec reduce(lambda a,b: a+b, 
 map(chr,
 Oct 21 11:18:52 archiveadmin sudo: ceph : (command continued) [long stream of ASCII character codes -- the pushy remote-execution payload -- omitted; the message is truncated here in the archive]