On Tue, Oct 22, 2013 at 3:39 AM, Fuchs, Andreas (SwissTXT)
<andreas.fu...@swisstxt.ch> wrote:
> Hi Alfredo
> Thanks for picking up on this
>
>> -----Original Message-----
>> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>> Sent: Montag, 21. Oktober 2013 14:17
>> To: Fuchs, Andreas (SwissTXT)
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Ceph Block Device install
>>
>> On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
>> <andreas.fu...@swisstxt.ch> wrote:
>> > Hi
>> >
>> > I am trying to install a client with the ceph block device, following the
>> > instructions here:
>> > http://ceph.com/docs/master/start/quick-rbd/
>> >
>> > The client has a user 'ceph', SSH is set up passwordless, and sudo is passwordless as well.
>> > When I run ceph-deploy I see:
>> >
>> > On the ceph management host:
>> >
>> > ceph-deploy install 10.100.21.10
>> > [ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster ceph hosts 10.100.21.10
>> > [ceph_deploy.install][DEBUG ] Detecting platform for host 10.100.21.10 ...
>> > [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
>> > [ceph_deploy][ERROR ] ClientInitException:
>> >
>>
>> Mmmn, this doesn't look like the full log... it looks like it's missing the rest of
>> the error? Unless that is where it stopped, which would be terrible.
>>
> This is where it stopped; there is nothing more.
>
>> What version of ceph-deploy are you using?
>>
> ceph-deploy --version
> 1.2.7
>
> But I saw that the target where I wanted to mount the ceph block device is
> CentOS 5, so this might never work anyway.

But we do support CentOS 5. We actually test extensively for that OS.

Were you able to create the ceph config and deploy the monitors
correctly, and basically complete the 'Quick Cluster' guide, before
attempting this?
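
For reference, the sequence I mean is roughly the following (a sketch only;
the hostnames and the OSD path are placeholders, substitute whatever you
passed to `ceph-deploy new` and whatever layout you chose):

  # run from the admin host, as the same non-root user every time
  ceph-deploy new mon-host
  ceph-deploy install mon-host osd-host
  ceph-deploy mon create mon-host
  ceph-deploy gatherkeys mon-host
  ceph-deploy osd prepare osd-host:/var/local/osd0
  ceph-deploy osd activate osd-host:/var/local/osd0

Knowing how far you got through that would narrow things down.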

It does look a bit odd that you are specifying an IP here; any reason
not to specify the hosts you defined when you did `ceph-deploy new`?
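
For example, if the client's hostname is `archiveadmin` (the name I see in
your secure log) and it resolves from the admin host, I would expect
something like:

  ceph-deploy install archiveadmin

rather than the raw IP. That is just an assumption about your naming, of
course.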

On ceph-deploy 1.2.7 all commands start with an INFO statement
displaying the version number and where the executable came from, and I
don't see that in your output. Also, what user are you using to
execute ceph-deploy?

Were you able to see any errors before getting to this point while going
through the Quick Cluster guide?
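
One more thing worth checking, since the failure happens right when
ceph-deploy opens the remote connection: whether the 'ceph' user can reach a
working python through passwordless sudo on the target, which is essentially
what ceph-deploy is doing in the secure-log entry you pasted below. A rough
check from the admin host (same user and host as in your ceph-deploy run):

  ssh ceph@10.100.21.10 'sudo python -c "import sys; print sys.version"'

On CentOS 5 that will be Python 2.4.x, which is quite old and worth
mentioning alongside the ClientInitException.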

> I think we'll do an NFS re-export of an rbd, but it would be nice if ceph-deploy
> handled this a little more gracefully :-)
>
> Andi
>
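On the NFS re-export idea: once you have a host that can map the image (as
far as I know the stock CentOS 5 kernel has no rbd module, so the mapping
would have to happen on a newer machine anyway), the usual pattern is roughly
the one below. Pool, image name, size and mount point are placeholders:

  rbd create myimage --size 4096 --pool rbd
  sudo rbd map myimage --pool rbd        # shows up as /dev/rbd/rbd/myimage
  sudo mkfs.ext4 /dev/rbd/rbd/myimage
  sudo mount /dev/rbd/rbd/myimage /mnt/myimage
  # then export /mnt/myimage over NFS via /etc/exports as usual
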
>> > On the target client in secure log:
>> >
>> > Oct 21 11:18:52 archiveadmin sshd[22320]: Accepted publickey for ceph from 10.100.220.110 port 47197 ssh2
>> > Oct 21 11:18:52 archiveadmin sshd[22320]: pam_unix(sshd:session): session opened for user ceph by (uid=0)
>> > Oct 21 11:18:52 archiveadmin sudo:     ceph : TTY=unknown ;
>> PWD=/home/ceph ; USER=root ; COMMAND=/usr/bin/python -u -c exec
>> reduce(lambda a,b: a+b, map(chr,
>> > Oct 21 11:18:52 archiveadmin sudo:     ceph : (command continued)
>> (105,109,112,111,114,116,32,95,95,98,117,105,108,116,105,110,95,95,44,32,111
>> ,115,44,32,109,97,114,115,104,97,108,44,32,115,121,115,10,116,114,121,58,10,
>> 32,32,32,32,105,109,112,111,114,116,32,104,97,115,104,108,105,98,10,101,120,
>> 99,101,112,116,32,73,109,112,111,114,116,69,114,114,111,114,58,10,32,32,32,3
>> 2,105,109,112,111,114,116,32,109,100,53,32,97,115,32,104,97,115,104,108,105,
>> 98,10,10,35,32,66,97,99,107,32,117,112,32,111,108,100,32,115,116,100,105,110
>> ,47,115,116,100,111,117,116,46,10,115,116,100,111,117,116,32,61,32,111,115,4
>> 6,102,100,111,112,101,110,40,111,115,46,100,117,112,40,115,121,115,46,115,1
>> 16,100,111,117,116,46,102,105,108,101,110,111,40,41,41,44,32,34,119,98,34,44
>> ,32,48,41,10,115,116,100,105,110,32,61,32,111,115,46,102,100,111,112,101,110
>> ,40,111,115,46,100,117,112,40,115,121,115,46,115,116,100,105,110,46,102,105,
>> 108,101,110,111,40,41,41,44,32,34,114,98,34,44,32,48,41,10,116,114,121,58,10,
>> 32,32,32,32,
>> >  105,1
>> > Oct 21 11:18:52 archiveadmin sudo:     ceph : (command continued)
>> ,115,118,99,114,116,10,32,32,32,32,109,115,118,99,114,116,46,115,101,116,109
>> ,111,100,101,40,115,116,100,111,117,116,46,102,105,108,101,110,111,40,41,44,
>> 32,111,115,46,79,95,66,73,78,65,82,89,41,10,32,32,32,32,109,115,118,99,114,11
>> 6,46,115,101,116,109,111,100,101,40,115,116,100,105,110,46,102,105,108,101,
>> 110,111,40,41,44,32,111,115,46,79,95,66,73,78,65,82,89,41,10,101,120,99,101,1
>> 12,116,32,73,109,112,111,114,116,69,114,114,111,114,58,32,112,97,115,115,10,
>> 115,121,115,46,115,116,100,111,117,116,46,99,108,111,115,101,40,41,10,115,1
>> 21,115,46,115,116,100,105,110,46,99,108,111,115,101,40,41,10,10,115,101,114,
>> 118,101,114,83,111,117,114,99,101,76,101,110,103,116,104,32,61,32,49,57,57,5
>> 2,54,10,115,101,114,118,101,114,83,111,117,114,99,101,32,61,32,115,116,100,1
>> 05,110,46,114,101,97,100,40,115,101,114,118,101,114,83,111,117,114,99,101,7
>> 6,101,110,103,116,104,41,10,119,104,105,108,101,32,108,101,110,40,115,101,1
>> 14,118,101,114
>> >  ,83,1
>> > Oct 21 11:18:52 archiveadmin sudo:     ceph : (command continued)
>> 0,32,115,101,114,118,101,114,83,111,117,114,99,101,76,101,110,103,116,104,5
>> 8,10,32,32,32,32,115,101,114,118,101,114,83,111,117,114,99,101,32,43,61,32,1
>> 15,116,100,105,110,46,114,101,97,100,40,115,101,114,118,101,114,83,111,117,
>> 114,99,101,76,101,110,103,116,104,32,45,32,108,101,110,40,115,101,114,118,1
>> 01,114,83,111,117,114,99,101,41,41,10,10,116,114,121,58,10,32,32,32,32,97,11
>> 5,115,101,114,116,32,104,97,115,104,108,105,98,46,109,100,53,40,115,101,114,
>> 118,101,114,83,111,117,114,99,101,41,46,100,105,103,101,115,116,40,41,32,61,
>> 61,32,39,95,92,120,57,57,81,92,120,49,50,33,126,92,120,57,48,92,120,98,101,92
>> ,120,102,98,92,120,99,51,92,120,56,97,92,120,56,101,89,33,92,120,101,100,92,1
>> 20,97,55,39,10,32,32,32,32,95,95,98,117,105,108,116,105,110,95,95,46,112,117,
>> 115,104,121,95,115,111,117,114,99,101,32,61,32,115,101,114,118,101,114,83,1
>> 11,117,114,99,101,10,32,32,32,32,115,101,114,118,101,114,67,111,100,101,32,6
>> 1,32,109,97,1
>> >  14,11
>> > Oct 21 11:18:52 archiveadmin sudo:     ceph : (command continued)
>> 7,100,115,40,115,101,114,118,101,114,83,111,117,114,99,101,41,10,32,32,32,32
>> ,101,120,101,99,32,115,101,114,118,101,114,67,111,100,101,10,32,32,32,32,112
>> ,117,115,104,121,95,115,101,114,118,101,114,40,115,116,100,105,110,44,32,11
>> 5,116,100,111,117,116,41,10,101,120,99,101,112,116,58,10,32,32,32,32,105,109
>> ,112,111,114,116,32,116,114,97,99,101,98,97,99,107,10,32,32,32,32,35,32,85,11
>> 0,99,111,109,109,101,110,116,32,102,111,114,32,100,101,98,117,103,103,105,1
>> 10,103,10,32,32,32,32,35,32,116,114,97,99,101,98,97,99,107,46,112,114,105,11
>> 0,116,95,101,120,99,40,102,105,108,101,61,111,112,101,110,40,34,115,116,100,
>> 101,114,114,46,116,120,116,34,44,32,34,119,34,41,41,10,32,32,32,32,114,97,10
>> 5,115,101)))
>> > Oct 21 11:18:52 archiveadmin sshd[22322]: Received disconnect from 10.100.220.110: 11: disconnected by user
>> > Oct 21 11:18:52 archiveadmin sshd[22320]: pam_unix(sshd:session): session closed for user ceph
>> > Oct 21 11:22:24 archiveadmin sshd[22367]: Connection closed by 10.100.220.182
>> >
>> >
>> > Any ideas what's going wrong?
>> >
>> > Andi
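
By the way, the enormous COMMAND= entries in that secure log are expected and
are not themselves the problem: that is ceph-deploy's pushy transport shipping
its bootstrap source to the remote python as a list of character codes (hence
the [ceph_deploy.sudo_pushy] line earlier). Decoding the first few numbers
from your log shows ordinary Python source, e.g.:

  python -c "print(''.join(map(chr, [105,109,112,111,114,116,32,95,95,98,117,105,108,116,105,110,95,95])))"
  # -> import __builtin__

So the interesting part is still whatever ClientInitException is hiding.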
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
