Hi Don,
ceph.conf is readable by all users.
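
One way to double-check what the cinder user can actually read (assuming the services run as a user named "cinder"; adjust if yours differs):

  ls -l /etc/ceph/ceph.conf                      # expect world-readable, e.g. 644
  sudo -u cinder head -n 1 /etc/ceph/ceph.conf   # the "cinder" user name is an assumption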

Thanks

Song


On Wed, Dec 18, 2013 at 10:19 AM, Don Talton (dotalton) <dotal...@cisco.com> wrote:

> Check that cinder has read access to your ceph.conf file. I've had to chmod mine to 644.
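>
> Something like this, assuming a packaged install where the services run as a user named cinder (the second command just confirms the read works):
>
> sudo chmod 644 /etc/ceph/ceph.conf
> sudo -u cinder head -n 1 /etc/ceph/ceph.conf   # "cinder" service user is an assumption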
>
>
>
> *From:* ceph-users-boun...@lists.ceph.com *On Behalf Of* bigbird Lim
> *Sent:* Wednesday, December 18, 2013 10:19 AM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] Error connecting to ceph cluster in openstack cinder
>
>
>
> Hi,
>
> I am trying to get ceph working with cinder. I have an existing openstack setup with one nova-controller and five compute nodes, and I have set up another three separate servers as a ceph cluster. Following the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/, I am getting this error when starting cinder-volume:
>
>
> 2013-12-18 09:06:49.756 12380 AUDIT cinder.service [-] Starting cinder-volume node (version 2013.2)
> 2013-12-18 09:06:50.286 12380 INFO cinder.openstack.common.rpc.common [req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Connected to AMQP server on localhost:5672
> 2013-12-18 09:06:50.297 12380 INFO cinder.volume.manager [req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Starting volume driver RBDDriver (1.1.0)
> 2013-12-18 09:06:50.316 12380 ERROR cinder.volume.drivers.rbd [req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] error connecting to ceph cluster
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd Traceback (most recent call last):
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 262, in check_for_setup_error
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd     with RADOSClient(self):
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 234, in __init__
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd     self.cluster, self.ioctx = driver._connect_to_rados(pool)
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 282, in _connect_to_rados
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd     client.connect()
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/dist-packages/rados.py", line 367, in connect
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd     raise make_ex(ret, "error calling connect")
> 2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd ObjectNotFound: error calling connect
> 2013-12-18 09:06:50.319 12380 ERROR cinder.volume.manager [req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Error encountered during initialization of driver: RBDDriver
> 2013-12-18 09:06:50.319 12380 ERROR cinder.volume.manager [req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
> 2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager Traceback (most recent call last):
> 2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 190, in init_host
> 2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager     self.driver.check_for_setup_error()
> 2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 267, in check_for_setup_error
> 2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager     raise exception.VolumeBackendAPIException(data=msg)
> 2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
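>
> (A quick way to take cinder out of the picture is to try the same connection from the shell; this sketch assumes the services run as a user named cinder and that the cephx user is client.volumes, matching rbd_user below:
>
> sudo -u cinder ceph --id volumes -s          # same identity cinder-volume would use
> sudo -u cinder rbd --id volumes ls volumes   # list the volumes pool with that identity
>
> If these fail with the same ObjectNotFound, the problem is in the cephx/keyring setup rather than in cinder itself.)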
>
>
>
> This is my cinder.conf file:
>
> cat /etc/cinder/cinder.conf
>
> [DEFAULT]
> rootwrap_config=/etc/cinder/rootwrap.conf
> sql_connection = mysql://cinderUser:cinderPass@10.193.0.120/cinder
> api_paste_config = /etc/cinder/api-paste.ini
> iscsi_helper=ietadm
> volume_name_template = volume-%s
> volume_group = cinder-volumes
> verbose = True
> auth_strategy = keystone
> iscsi_ip_address=10.193.0.120
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_pool=volumes
> glance_api_version=2
> rbd_user=volumes
> rbd_secret_uuid=19365acb-10b4-44c9-9a28-f948e8128e91
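>
> (For reference, rbd_secret_uuid is the libvirt secret the guide has you create on each compute node, roughly like this, with secret.xml as given in the guide and client.volumes assumed as the cephx user:
>
> virsh secret-define --file secret.xml   # defines the secret with the uuid above
> virsh secret-set-value --secret 19365acb-10b4-44c9-9a28-f948e8128e91 --base64 $(ceph auth get-key client.volumes)
>
> It only comes into play when nova attaches volumes, so it should not affect cinder-volume startup.)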
>
>
>
> and the ceph.conf file:
>
> [global]
> fsid = 633accd0-dd09-4d97-ab40-2aca79f44d1c
> mon_initial_members = ceph-1
> mon_host = 10.193.0.111
> auth_supported = cephx
> osd_journal_size = 1024
> filestore_xattr_use_omap = true
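>
> (Since auth_supported = cephx, the client key has to be present and readable too. The guide's setup for a volumes user boils down to something like this on the cinder host, with the caps following the guide's volumes-pool example:
>
> ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes' -o /etc/ceph/ceph.client.volumes.keyring
> chmod 644 /etc/ceph/ceph.client.volumes.keyring   # must be readable by the cinder user
>
> Without that keyring in place, rados connect raises ObjectNotFound even when ceph.conf itself is readable.)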
>
>
>
> Thanks for the help
>
>
>
> Song
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
