Kevin,
After changing the pool size to 3, make sure min_size is set to 1 so the
pool keeps serving I/O with 2 of the 3 hosts offline.
http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values [2]
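For reference, both values are set per pool with the standard "ceph osd
pool set" command; a quick sketch (the pool name "rbd" here is just an
example, substitute your own):

```shell
# Set replica count to 3 for the pool (example pool name "rbd")
ceph osd pool set rbd size 3

# Allow I/O to continue with only 1 replica available
ceph osd pool set rbd min_size 1

# Verify the result
ceph osd pool get rbd size
ceph osd pool get rbd min_size
```

Note that min_size 1 means the cluster will accept writes with a single
surviving copy, so weigh that against the durability risk.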
How many MONs do you have, and are they on the same hosts as the OSDs? If
you have 3 MONs running on the OSD hosts and two of those hosts go
offline, you will lose MON quorum and all I/O will be blocked.
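You can check your MON count and quorum state with the standard status
commands (these need to run on a node with a valid admin keyring):

```shell
# Show the monitor map: how many MONs exist and which are in quorum
ceph mon stat

# More detail, including the current quorum membership
ceph quorum_status
```

With 3 MONs, losing 2 drops you below the majority (2 of 3) needed for
quorum, which is why spreading MONs across failure domains matters.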
I would also check your CRUSH map. I believe you want your rules to use
"step chooseleaf firstn 0 type host" rather than "... type osd", so that
replicas land on different hosts. I have not had to make that change
myself, so read up on it first; don't take my word for it.
http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-parameters [3]
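If you want to inspect the rules yourself, the usual workflow is to dump
and decompile the CRUSH map with crushtool; a sketch (the filenames are
arbitrary):

```shell
# Dump the binary CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin

# Decompile it to editable text (requires crushtool)
crushtool -d crushmap.bin -o crushmap.txt

# In crushmap.txt, the replicated rule should contain a line like:
#   step chooseleaf firstn 0 type host
# If it says "type osd", replicas may end up on the same host.

# After editing, recompile and inject the new map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

Injecting a new CRUSH map can trigger substantial data movement, so test
on a non-production cluster first if you can.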
Hope that helps.
Chris
On 2016-11-23 1:32 pm, Kevin Olbrich wrote:
> Hi,
>
> just to make sure, as I did not find a reference in the docs:
> Are replicas spread across hosts or "just" OSDs?
>
> I am using a 5 OSD cluster (4 pools, 128 pgs each) with size = 2. Currently
> each OSD is a ZFS backed storage array.
> Now I installed a server which is planned to host 4x OSDs (and setting size
> to 3).
>
> I want to make sure we can resist two offline hosts (in terms of hardware).
> Is my assumption correct?
>
> Mit freundlichen Grüßen / best regards,
> Kevin Olbrich.
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [1]
Links:
------
[1] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
[2] http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
[3] http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-parameters