Re: [ceph-users] ceph health related message

2014-09-22 Thread Sean Sullivan
I had this happen to me as well; it turned out to be a connlimit issue. I
would check dmesg/the kernel log for conntrack "table full, dropping packet"
messages, then increase the connection-tracking limit. Odd, since I was
connected over SSH at the time, but I can't argue with syslog.
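For reference, a minimal sketch of how one might check for conntrack drops and raise the limit. The sysctl name is the standard one on modern kernels, but the paths and the example value are assumptions; verify them for your kernel and distro:

```shell
# Look for conntrack drop messages in the kernel log (message text may vary by kernel):
dmesg | grep -i "table full, dropping packet"

# Inspect the current ceiling and the live entry count:
cat /proc/sys/net/netfilter/nf_conntrack_max
cat /proc/sys/net/netfilter/nf_conntrack_count

# Raise the limit for the running system (example value; size it to your workload),
# then persist it in /etc/sysctl.conf or /etc/sysctl.d/ so it survives a reboot:
sysctl -w net.netfilter.nf_conntrack_max=262144
```

If the live count sits near the max when the faults appear, conntrack exhaustion is the likely culprit.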
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph health related message

2014-09-19 Thread BG
I think you may be hitting a firewall issue on port 6789; I had a similar
issue recently.

The quick start preflight guide has been updated very recently for information
on opening the required ports for firewalld or iptables, see link below:
http://ceph.com/docs/master/start/quick-start-preflight/#open-required-ports
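As a quick sketch of what the preflight guide covers: opening the monitor port 6789/tcp with either firewalld or iptables. The zone name and distro split below are assumptions; adapt to your setup:

```shell
# firewalld (e.g. RHEL/CentOS): open the monitor port and reload.
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --reload

# iptables (e.g. Debian/Ubuntu): accept inbound monitor traffic.
# Remember to persist the rule (iptables-save) or it is lost on reboot.
iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
```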




[ceph-users] ceph health related message

2014-09-18 Thread shiva rkreddy
Hi,

I've set up a cluster with 3 monitors and 2 OSD nodes with 2 disks each.
The cluster is in the active+clean state, but "ceph -s" keeps throwing the
following message every other time it is run.

 # ceph -s
2014-09-19 04:13:07.116662 7fc88c3f9700  0 -- :/1011833 >> 192.168.240.200:6789/0 pipe(0x7fc890021200 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc890021470).fault

If ceph -s is run from the same IP listed above, the message never appears,
not even once.
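One way to narrow this down is to test whether the monitor port is even reachable from the client that sees the fault. A small sketch (the IP and port are taken from the log line above; the helper name is mine):

```python
import socket

def mon_reachable(host, port=6789, timeout=3):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from the client that shows the .fault message; a False here
# would point to a firewall or conntrack drop rather than a Ceph problem.
print(mon_reachable("192.168.240.200"))
```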

Appreciate your suggestions.

Thanks