Re: [ceph-users] newly osds dying (jewel 10.2.2)

2016-07-26 Thread Goncalo Borges
Hi cephers. I think this is solved. The issue was caused by Puppet and the new interface naming in CentOS 7. In our Puppet configs we defined an iptables module which restricts access to the private Ceph network based on the source address and the destination interface. We had eth1 hardwired, and in this new serv…
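
A minimal sketch of the failure mode (the interface name and subnet here are hypothetical, not from the original post): a rule that only accepts cluster traffic on a hardwired interface name stops matching once CentOS 7's predictable interface naming gives the NIC a different name, and traffic then falls through to the restrictive default policy.

    # Old rule, hardwired to eth1 (subnet is illustrative):
    iptables -A INPUT -i eth1 -s 192.168.1.0/24 -j ACCEPT
    # On CentOS 7 the same NIC may come up as e.g. enp2s0f1, so the
    # rule above never matches. The fix is to use the new name (or to
    # match on the source network only, without pinning the interface):
    iptables -A INPUT -i enp2s0f1 -s 192.168.1.0/24 -j ACCEPT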

[ceph-users] newly osds dying (jewel 10.2.2)

2016-07-26 Thread Goncalo Borges
Hi cephers... Our production cluster is running Jewel 10.2.2: 8 servers, each with 8 OSDs, making a grand total of 64 OSDs. Each server also hosts 2 SSDs for journals; each SSD supports 4 journals. We had 1/3 of our OSDs above 80% occupied, and we decid…
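
For anyone wanting to reproduce the utilization check mentioned above, per-OSD occupancy can be read with the standard commands below (both are available in Jewel):

    # Per-OSD disk usage, including %USE and variance from the
    # cluster average:
    ceph osd df
    # Cluster-wide and per-pool usage summary:
    ceph df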