On 13/07/14 17:07, Andrija Panic wrote:
Hi,

Sorry to bother you, but I have an urgent situation: I upgraded Ceph from 0.72 to
0.80 (CentOS 6.5), and now none of my CloudStack hosts can connect.

I ran a basic "yum update ceph" on the first MON leader, and all Ceph
services on that host were restarted. I did the same on the other Ceph
nodes (I have 1 MON + 2 OSDs per physical host), then set the tunables
to optimal with "ceph osd crush tunables optimal". After some
rebalancing, ceph shows HEALTH_OK.

Also, I can still create new images with "qemu-img -f rbd rbd:/cloudstack".

Libvirt 1.2.3 was compiled while Ceph was at 0.72, but Wido told me
that I don't need to REcompile it now against Ceph 0.80...

Libvirt logs:

libvirt: Storage Driver error : Storage pool not found: no storage pool
with matching uuid ‡Îhyš<JŠ~`a*×

Note the strange garbled "uuid" above - I'm not sure what is happening?

Did I forget to do something after the Ceph upgrade?

Have you got any Ceph logs to examine on the host running libvirt? When I try to connect a v0.72 client to a v0.81 cluster I get:

2014-07-13 18:21:23.860898 7fc3bd2ca700 0 -- 192.168.122.41:0/1002012 >> 192.168.122.21:6789/0 pipe(0x7fc3c00241f0 sd=3 :49451 s=1 pgs=0 cs=0 l=1 c=0x7fc3c0024450).connect protocol feature mismatch, my fffffffff < peer 5fffffffff missing 5000000000
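That "feature mismatch" line already encodes the problem: the bits reported as "missing" are simply the features the peer (monitor) advertises that the old client lacks, i.e. peer & ~mine. With "tunables optimal" on a 0.80 cluster those extra bits are most likely the newer CRUSH tunables feature flags, which is exactly what locks out a pre-Firefly client. A quick sketch with the values from the log above:

```shell
# Decode the feature mismatch from the log line above:
#   my fffffffff      = feature mask of the old (v0.72) client
#   peer 5fffffffff   = feature mask of the upgraded monitor
# The "missing" value is the peer's features the client does not have.
mine=0xfffffffff
peer=0x5fffffffff
printf 'missing %x\n' $(( peer & ~mine ))   # prints "missing 5000000000"
```

If that is the cause, reverting the tunables profile to one the old clients support (or upgrading the client-side librados/librbd) should let the hosts reconnect.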

Regards

Mark

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
