Have you tried specifying the socket path in your Ceph configuration file?
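For reference, the admin socket path can be pinned explicitly in ceph.conf. A minimal sketch; the `$cluster`/`$name` metavariables are Ceph's standard substitutions and this path matches the usual default:

```ini
[global]
    # Pin the admin socket path explicitly; $cluster and $name expand
    # per-daemon, so a non-default cluster name still yields a
    # predictable socket name.
    admin socket = /var/run/ceph/$cluster-$name.asok
```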
On Sat, May 17, 2014 at 9:38 AM, reistlin87 reistli...@yandex.ru wrote:
Hi all! Sorry for my English, I am Russian :)
We got the same error on different Linux distros (CentOS 6.4, SuSE 11) and
different Ceph versions.
In case someone else runs into the same situation as mine:
the solution was to uncomment the enable_v2_api = True line in
glance-api.conf
o_0
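For anyone hitting the same symptom, the fix described above amounts to a one-line change in glance-api.conf. A sketch, assuming the standard Glance config layout (the option lives in the [DEFAULT] section; the point was uncommenting it, not adding it):

```ini
[DEFAULT]
# Uncomment so the Glance v2 image API is served; some clients
# (e.g. when Ceph/RBD is the image backend) require the v2 API.
enable_v2_api = True
```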
2014-05-15 23:38 GMT+03:00 Sebastien Han sebastien@enovance.com:
Glad to hear that it works now :)
Sébastien Han
Cloud Engineer
Sure you can change ruleset of pools. With different ruleset
specified, data will be put into different places according to the
CRUSH algorithm.
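As a concrete sketch of changing a pool's ruleset (command form as used by Ceph releases of that era; the pool name `rbd` and ruleset id `2` are placeholder values):

```shell
# Assign CRUSH ruleset 2 to the pool named "rbd" (both placeholders);
# new data placement then follows that ruleset's rules.
ceph osd pool set rbd crush_ruleset 2

# Verify the change took effect
ceph osd pool get rbd crush_ruleset
```

These commands need a running cluster and admin keyring, so treat them as an illustration rather than something to paste blindly.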
On Thu, May 15, 2014 at 8:26 AM, Cao, Buddy buddy@intel.com wrote:
Hi,
I notice that after creating a Ceph cluster, the ruleset for the default pools
Yes, I have tried. I have found that Ubuntu Server creates the admin socket with the right name. All other distros (I have tested CentOS 6.5, OpenSUSE 13.1, Debian 7.5) create the wrong admin socket name when a non-default cluster name is used. Is it a bug, or is it my mistake? 18.05.2014, 18:30, "John Wilkins"
Wondering what the status of this fix is:
https://review.openstack.org/#/c/46879/? Which release includes it?
— Yuming
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
/var/lib/ceph/osd/ceph-1 should be empty before --mkfs.
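The requirement above is simply that the OSD data directory be empty before `ceph-osd --mkfs` runs. A self-contained sketch of the pre-flight check, using a temp directory as a stand-in for /var/lib/ceph/osd/ceph-1:

```shell
# Stand-in for the real OSD data dir, /var/lib/ceph/osd/ceph-1,
# so this snippet runs anywhere.
osd_dir=$(mktemp -d)

# ceph-osd -i 1 --mkfs --mkkey will fail if the directory already
# holds files, so check that it is empty first.
if [ -z "$(ls -A "$osd_dir")" ]; then
    echo "osd dir empty, safe to run --mkfs"
fi
```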
On Wed, May 14, 2014 at 1:07 PM, Srinivasa Rao Ragolu
srag...@mvista.com wrote:
Hi All,
I am following the manual steps to create an OSD node.
While executing the command below, I am hitting an error:
# ceph-osd -i 1 --mkfs --mkkey
The rbd driver must be present in the running kernel. Check it with
cat /boot/config-`uname -r` | grep CONFIG_BLK_DEV_RBD. The result should be
CONFIG_BLK_DEV_RBD=m or CONFIG_BLK_DEV_RBD=y.
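A sketch of that check, made self-contained here by writing a sample config file; on a real host you would instead point `config` at /boot/config-$(uname -r):

```shell
# Hypothetical stand-in for the real kernel config file.
config=$(mktemp)
printf 'CONFIG_BLK_DEV_RBD=m\n' > "$config"

# =m means rbd is a loadable module (load it with: modprobe rbd);
# =y means it is built into the kernel.
if grep -q '^CONFIG_BLK_DEV_RBD=[my]' "$config"; then
    echo "kernel rbd driver available"
fi
```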
On Fri, May 9, 2014 at 3:01 PM, Ease Lu eas...@gmail.com wrote:
Hi All,
I run the command:
[ceph@ceph-client