Re: [ceph-users] How are replicas spread in default crush configuration?

2016-11-23 Thread Samuel Just
On Wed, Nov 23, 2016 at 4:11 PM, Chris Taylor  wrote:
> Kevin,
>
> After changing the pool size to 3, make sure the min_size is set to 1 to
> allow 2 of the 3 hosts to be offline.

If you do this, the flip side is that while running in that degraded
configuration, losing the single remaining host will render your data
unrecoverable (the most recent writes were only witnessed by that one
OSD).
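
For that reason min_size = 1 is best treated as a temporary measure. As a
sketch (the pool name "rbd" below is only a placeholder), you would restore
the safer floor once all hosts are back and recovery has finished:

    # allow I/O with a single surviving replica (risky, see above)
    ceph osd pool set rbd min_size 1

    # raise the floor again once recovery completes
    ceph osd pool set rbd min_size 2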

>
> http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
>
> How many MONs do you have and are they on the same OSD hosts? If you have 3
> MONs running on the OSD hosts and two go offline, you will not have a quorum
> of MONs and I/O will be blocked.
>
> I would also check your CRUSH map. I believe you want to make sure your
> rules have "step chooseleaf firstn 0 type host" and not "... type osd" so
> that replicas are on different hosts. I have not had to make that change
> before so you will want to read up on it first. Don't take my word for it.
>
> http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-parameters
>
> Hope that helps.
>
>
>
> Chris
>
>
>
> On 2016-11-23 1:32 pm, Kevin Olbrich wrote:
>
> Hi,
>
> just to make sure, as I did not find a reference in the docs:
> Are replicas spread across hosts or "just" OSDs?
>
> I am using a 5-OSD cluster (4 pools, 128 PGs each) with size = 2. Currently
> each OSD is a ZFS-backed storage array.
> Now I have installed a server that is planned to host 4 OSDs (and I plan to
> set size to 3).
>
> I want to make sure we can survive two offline hosts (in terms of hardware
> failure). Is my assumption correct?
>
> Mit freundlichen Grüßen / best regards,
> Kevin Olbrich.
>


Re: [ceph-users] How are replicas spread in default crush configuration?

2016-11-23 Thread Chris Taylor
 

Kevin, 

After changing the pool size to 3, make sure the min_size is set to 1 to
allow 2 of the 3 hosts to be offline. 

http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
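
For reference, a minimal sketch of both settings (the pool name "rbd" is
only a placeholder):

    # keep 3 copies of each object
    ceph osd pool set rbd size 3

    # keep serving I/O as long as at least 1 copy is available
    ceph osd pool set rbd min_size 1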

How many MONs do you have and are they on the same OSD hosts? If you
have 3 MONs running on the OSD hosts and two go offline, you will not
have a quorum of MONs and I/O will be blocked. 
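
You can verify quorum at any time with the standard CLI:

    # one-line summary of monitors and current quorum
    ceph mon stat

    # detailed quorum state as JSON
    ceph quorum_status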

I would also check your CRUSH map. I believe you want to make sure your
rules have "step chooseleaf firstn 0 type host" and not "... type osd"
so that replicas are on different hosts. I have not had to make that
change before so you will want to read up on it first. Don't take my
word for it. 

http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-parameters
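
One way to check (the file names here are arbitrary) is to decompile the
CRUSH map and inspect the rule:

    # dump the compiled CRUSH map and decompile it to text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

A host-level replicated rule in crushmap.txt should contain a line like

    step chooseleaf firstn 0 type host

whereas "type osd" would allow two replicas of one object to land on
different OSDs of the same host.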

Hope that helps. 

Chris 

On 2016-11-23 1:32 pm, Kevin Olbrich wrote: 

> Hi, 
> 
> just to make sure, as I did not find a reference in the docs: 
> Are replicas spread across hosts or "just" OSDs? 
> 
> I am using a 5-OSD cluster (4 pools, 128 PGs each) with size = 2. Currently
> each OSD is a ZFS-backed storage array.
> Now I have installed a server that is planned to host 4 OSDs (and I plan to
> set size to 3).
>
> I want to make sure we can survive two offline hosts (in terms of hardware
> failure). Is my assumption correct?
> 
> Mit freundlichen Grüßen / best regards,
> Kevin Olbrich. 


[ceph-users] How are replicas spread in default crush configuration?

2016-11-23 Thread Kevin Olbrich
Hi,

just to make sure, as I did not find a reference in the docs:
Are replicas spread across hosts or "just" OSDs?

I am using a 5-OSD cluster (4 pools, 128 PGs each) with size = 2. Currently
each OSD is a ZFS-backed storage array.
Now I have installed a server that is planned to host 4 OSDs (and I plan to
set size to 3).

I want to make sure we can survive two offline hosts (in terms of hardware
failure). Is my assumption correct?

Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com