Hi all-

I am following the object storage quick start guide.  I have a cluster with two 
OSDs and have completed the steps on both.  Both fail to start radosgw, but 
each in a different way.  All of the previous steps in the quick start guide 
appeared to complete successfully.  Any tips on how to debug from here?  
Thanks!


OSD1:

ceph@cephtest05:/etc/ceph$ sudo /etc/init.d/radosgw start
ceph@cephtest05:/etc/ceph$

ceph@cephtest05:/etc/ceph$ sudo /etc/init.d/radosgw status
/usr/bin/radosgw is not running.
ceph@cephtest05:/etc/ceph$

ceph@cephtest05:/etc/ceph$ cat /var/log/ceph/radosgw.log
ceph@cephtest05:/etc/ceph$
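
Since the log on OSD1 stays empty, one thing I could try is running the
gateway in the foreground so any startup error prints to the terminal
instead of the (apparently unwritten) log file.  A rough sketch; the -d
and debug flags are from the radosgw man page, and the client name is
the one from the quick start guide:

```shell
# Run radosgw in the foreground with verbose gateway and messenger
# logging; errors should land on the terminal rather than the log file.
sudo radosgw -d -n client.radosgw.gateway --debug-rgw 20 --debug-ms 1
```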


OSD2:

ceph@cephtest06:/etc/ceph$ sudo /etc/init.d/radosgw start
Starting client.radosgw.gateway...
2013-09-25 14:03:01.235789 7f713d79d780 -1 WARNING: libcurl doesn't support curl_multi_wait()
2013-09-25 14:03:01.235797 7f713d79d780 -1 WARNING: cross zone / region transfer performance may be affected
ceph@cephtest06:/etc/ceph$

ceph@cephtest06:/etc/ceph$ sudo /etc/init.d/radosgw status
/usr/bin/radosgw is not running.
ceph@cephtest06:/etc/ceph$

ceph@cephtest06:/etc/ceph$ cat /var/log/ceph/radosgw.log
2013-09-25 14:03:01.235760 7f713d79d780  0 ceph version 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a), process radosgw, pid 13187
2013-09-25 14:03:01.235789 7f713d79d780 -1 WARNING: libcurl doesn't support curl_multi_wait()
2013-09-25 14:03:01.235797 7f713d79d780 -1 WARNING: cross zone / region transfer performance may be affected
2013-09-25 14:03:01.245786 7f713d79d780  0 librados: client.radosgw.gateway authentication error (1) Operation not permitted
2013-09-25 14:03:01.246526 7f713d79d780 -1 Couldn't init storage provider (RADOS)
ceph@cephtest06:/etc/ceph$
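
The "authentication error (1) Operation not permitted" on OSD2 makes me
suspect the client.radosgw.gateway key.  Here is a rough check I could run
to confirm the keyring referenced in ceph.conf actually exists and is
readable (the section name and default path are assumptions taken from the
quick start guide):

```shell
# Sketch: find the keyring configured for client.radosgw.gateway in
# ceph.conf and check that the file is readable.  CONF path is the
# quick start default, not verified.
CONF=${CONF:-/etc/ceph/ceph.conf}
# Pull the keyring path out of the [client.radosgw.gateway] section.
KEYRING=$(awk '/^\[client.radosgw.gateway\]/{f=1; next} /^\[/{f=0}
               f && $1 == "keyring" {print $NF}' "$CONF")
echo "keyring = ${KEYRING:-<not set>}"
if [ -n "$KEYRING" ] && sudo test -r "$KEYRING"; then
    echo "keyring is readable"
else
    echo "keyring missing or unreadable"
fi
```

If the file is there, I would then compare its contents against the output
of `sudo ceph auth list` on a monitor, to make sure the gateway key was
actually registered with the cluster.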


For reference, cluster health looks OK apart from a clock-skew warning:

ceph@cephtest06:/etc/ceph$ sudo ceph status
  cluster a45e6e54-70ef-4470-91db-2152965deec5
   health HEALTH_WARN clock skew detected on mon.cephtest03, mon.cephtest04
   monmap e1: 3 mons at {cephtest02=10.0.0.2:6789/0,cephtest03=10.0.0.3:6789/0,cephtest04=10.0.0.4:6789/0}, election epoch 6, quorum 0,1,2 cephtest02,cephtest03,cephtest04
   osdmap e9: 2 osds: 2 up, 2 in
    pgmap v439: 192 pgs: 192 active+clean; 0 bytes data, 72548 KB used, 1998 GB / 1999 GB avail
   mdsmap e1: 0/0/1 up

ceph@cephtest06:/etc/ceph$ sudo ceph health
HEALTH_WARN clock skew detected on mon.cephtest03, mon.cephtest04
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
