Hi cephers.
I think this is solved.
The issue is caused by Puppet together with
the new interface naming in CentOS 7.
In our puppet configs we defined an iptables module which restricts
access to the private Ceph network based on the source address and the
destination interface. We had eth1 hardwired, and in this new
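For context, the hardwired rule was presumably something along the lines of the sketch below (only the eth1 interface name comes from the thread; the subnet is a hypothetical placeholder). Under CentOS 7's predictable interface naming, eth1 typically becomes a name like enp0s3, so an interface-matched rule silently stops matching:

```shell
# Hedged sketch, not the actual puppet output: accept Ceph traffic only
# when it arrives on eth1 from the private cluster network (the subnet
# and the 6800:7300 OSD port range are illustrative).
iptables -A INPUT -i eth1 -s 10.0.0.0/24 -p tcp --dport 6800:7300 -j ACCEPT
iptables -A INPUT -p tcp --dport 6800:7300 -j DROP
```

When the interface is renamed, the ACCEPT rule never matches and the DROP catches everything, which looks exactly like a dead private network.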
Hi cephers...
Our production cluster is running Jewel 10.2.2.
We were running a production cluster with 8 servers, each with 8 osds, making a
grand total of 64 osds. Each server also hosts 2 SSDs for journals. Each SSD
supports 4 journals.
We had 1/3 of our osds above 80% occupied, and we
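The layout above can be sanity-checked with quick arithmetic (all numbers taken from the description):

```shell
# 8 servers x 8 OSDs = 64 OSDs total; 2 journal SSDs x 4 journals each
# covers the 8 OSDs on every server.
servers=8
osds_per_server=8
total_osds=$(( servers * osds_per_server ))
ssds_per_server=2
journals_per_ssd=4
journals_per_server=$(( ssds_per_server * journals_per_ssd ))
echo "total OSDs: $total_osds, journals per server: $journals_per_server"
```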
Thanks for the feedback.
I removed "ceph-deploy mon create + ceph-deploy gatherkeys."
And my system disk is sde.
In your opinion, the disk cannot be unmounted when purgedata is run.
Is it a bug on Ubuntu 16.04?
$ ssh csAnt lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:00
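In case it helps: when purgedata fails because the OSD data partitions are still mounted, one can unmount them by hand first. A minimal sketch, assuming the default ceph-deploy mount points under /var/lib/ceph/osd:

```shell
# Hedged sketch: unmount any OSD data directories still mounted so that
# ceph-deploy purgedata can wipe them (paths assume ceph-deploy defaults).
for dir in /var/lib/ceph/osd/ceph-*; do
    # the -d test skips the literal glob when nothing matches
    [ -d "$dir" ] && sudo umount "$dir"
done
```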
Hi,
first, I have one remark: you run both "ceph-deploy mon create-initial" and
"ceph-deploy mon create + ceph-deploy gatherkeys". Choose one or the other, not
both.
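Put differently, a consistent bootstrap is one of the two sequences below, not a mix of them (hostname csElsa taken from your script; both sequences end with the mon created and the keys collected):

```shell
# Option A: create-initial creates the mon(s) and gathers the keys in one step.
ceph-deploy new csElsa
ceph-deploy mon create-initial

# Option B: create the mon, then gather the keys explicitly.
ceph-deploy new csElsa
ceph-deploy mon create csElsa
ceph-deploy gatherkeys csElsa
```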
Then, I notice that you are zapping and deploying using drive /dev/sda, which is
usually the system disk. So next question
Hi,
When I run the script below to install Ceph (10.2.0), I hit the error "no osds".
Hammer installed fine with the same script,
so I think I am missing something new that changed since Hammer.
Do you know what I am missing?
--- The script ---
#!/bin/sh
set -x
ceph-deploy new csElsa
echo "osd pool default size =