I'm not sure why you're able to run the 'rados' and 'ceph' commands but
not 'radosgw'. Just note that the former two don't connect to the
OSDs, whereas the latter does, so it might be failing at a different level.
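
If you want to rule out OSD connectivity on its own, a quick check with the
admin client could be a plain object write/read, e.g. (the 'data' pool is
from your 'rados df' output below; the object name is just an example):

    rados -p data put test-obj /etc/hosts
    rados -p data ls
    rados -p data rm test-obj

If those succeed, the problem is more likely the user/keyring radosgw is
starting with than cluster connectivity.
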
You're using the default client.admin as the user for radosgw, but
your ceph.conf file doesn't have a section for it; all the relevant
settings are under client.radosgw.gateway. Try fixing that first.
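
In other words, either start radosgw as that user explicitly, e.g.:

    sudo /usr/bin/radosgw -d -c /etc/ceph/ceph.conf -n client.radosgw.gateway --debug-rgw 20 --debug-ms 1

or make sure ceph.conf has a section whose name matches the user the daemon
actually runs as. Roughly (section name and paths taken from your output, so
treat this as a sketch rather than a drop-in config):

    [client.radosgw.gateway]
            host = joceph08
            keyring = /etc/ceph/keyring.radosgw.gateway
            rgw_socket_path = /tmp/radosgw.sock
            log_file = /var/log/ceph/radosgw.log

The key also needs to be registered with the cluster if it isn't already,
e.g.:

    sudo ceph auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway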

Yehuda

On Mon, Nov 4, 2013 at 12:30 PM, Gruher, Joseph R
<joseph.r.gru...@intel.com> wrote:
> Sorry to bump this, but does anyone have any idea what could be wrong here?
>
> To re-summarize: radosgw fails to start.  The debug output seems to indicate
> it is complaining about the keyring, but the keyring is present and readable,
> and other Ceph functions which require the keyring succeed.  So why can't
> radosgw start?  Details below.
>
...
>>>2013-11-01 10:59:47.018774 7f83978e4820  0 librados: client.admin
...
>>>[ceph@joceph08 ceph]$ pwd
>>>/etc/ceph
>>>
>>>[ceph@joceph08 ceph]$ ls
>>>ceph.client.admin.keyring  ceph.conf  keyring.radosgw.gateway  rbdmap
>>>
>>>[ceph@joceph08 ceph]$ cat ceph.client.admin.keyring
>>>[client.admin]
>>>        key = AQCYyHJSCFH3BBAA472q80qrAiIIVbvJfK/47A==
>>>
>>>[ceph@joceph08 ceph]$ cat keyring.radosgw.gateway
>>>[client.radosgw.gateway]
>>>        key = AQBh6nNS0Cu3HxAAMxLsbEYZ3pEbwEBajQb1WA==
>>>        caps mon = "allow rw"
>>>        caps osd = "allow rwx"
>>>
>>>[ceph@joceph08 ceph]$ cat ceph.conf
>>>[client.radosgw.joceph08]
>>>host = joceph08
>>>log_file = /var/log/ceph/radosgw.log
>>>keyring = /etc/ceph/keyring.radosgw.gateway
>>>rgw_socket_path = /tmp/radosgw.sock
>>>
>>>[global]
>>>auth_service_required = cephx
>>>filestore_xattr_use_omap = true
>>>auth_client_required = cephx
>>>auth_cluster_required = cephx
>>>mon_host = 10.23.37.142,10.23.37.145,10.23.37.161,10.23.37.165
>>>osd_journal_size = 1024
>>>mon_initial_members = joceph01, joceph02, joceph03, joceph04
>>>fsid = 74d808db-aaa7-41d2-8a84-7d590327a3c7
>>
>>By the way, I can run other commands on the node which I think must require
>>the keyring, and they succeed.
>>
>>[ceph@joceph08 ceph]$ sudo /usr/bin/radosgw -d -c /etc/ceph/ceph.conf --debug-rgw 20 --debug-ms 1 start
>>2013-11-01 11:45:07.935483 7ff2e2f11820  0 ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7), process radosgw, pid 19265
>>2013-11-01 11:45:07.935488 7ff2e2f11820 -1 WARNING: libcurl doesn't support curl_multi_wait()
>>2013-11-01 11:45:07.935489 7ff2e2f11820 -1 WARNING: cross zone / region transfer performance may be affected
>>2013-11-01 11:45:07.938719 7ff2e2f11820  1 -- :/0 messenger.start
>>2013-11-01 11:45:07.938817 7ff2e2f11820 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
>>2013-11-01 11:45:07.938818 7ff2e2f11820  0 librados: client.admin initialization error (2) No such file or directory
>>2013-11-01 11:45:07.938832 7ff2e2f11820  1 -- :/1019265 mark_down_all
>>2013-11-01 11:45:07.939150 7ff2e2f11820  1 -- :/1019265 shutdown complete.
>>2013-11-01 11:45:07.939219 7ff2e2f11820 -1 Couldn't init storage provider (RADOS)
>>
>>[ceph@joceph08 ceph]$ rados df
>>pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
>>data            -                          0            0            0            0            0            0            0            0            0
>>metadata        -                          0            0            0            0            0            0            0            0            0
>>rbd             -                          0            0            0            0            0            0            0            0            0
>>  total used          630648            0
>>  total avail    11714822792
>>  total space    11715453440
>>
>>[ceph@joceph08 ceph]$ ceph status
>>  cluster 74d808db-aaa7-41d2-8a84-7d590327a3c7
>>   health HEALTH_OK
>>   monmap e1: 4 mons at {joceph01=10.23.37.142:6789/0,joceph02=10.23.37.145:6789/0,joceph03=10.23.37.161:6789/0,joceph04=10.23.37.165:6789/0}, election epoch 8, quorum 0,1,2,3 joceph01,joceph02,joceph03,joceph04
>>   osdmap e88: 16 osds: 16 up, 16 in
>>    pgmap v1402: 2400 pgs: 2400 active+clean; 0 bytes data, 615 MB used, 11172 GB / 11172 GB avail
>>   mdsmap e1: 0/0/1 up
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com