> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ja. 
> C.A.
> Sent: 23 September 2016 09:50
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] rbd pool:replica size choose: 2 vs 3
> 
> Hi
> 
> with rep_size=2 and min_size=2, what drawbacks are removed compared with
> rep_size=2 and min_size=1?

If you lose a disk, I/O to the affected PGs will block until they are back at 
size=2 again.
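
For reference, a rough sketch of inspecting and changing these settings (the pool name `vm-storage` is taken from the thread below; substitute your own, and note these need a running cluster):

```shell
# Show the current replication settings for a pool
ceph osd pool get vm-storage size
ceph osd pool get vm-storage min_size

# Move to the recommended size=3 / min_size=2. Raising size triggers
# backfill while the third replica is created; changing min_size alone
# causes no data movement.
ceph osd pool set vm-storage size 3
ceph osd pool set vm-storage min_size 2
```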

> 
> thx
> J.
> 
> On 23/09/16 10:07, Wido den Hollander wrote:
> >> Op 23 september 2016 om 10:04 schreef mj <li...@merit.unu.edu>:
> >>
> >>
> >> Hi,
> >>
> >> On 09/23/2016 09:41 AM, Dan van der Ster wrote:
> >>>> If you care about your data you run with size = 3 and min_size = 2.
> >>>>
> >>>> Wido
> >> We're currently running with min_size 1. Can we simply change this,
> >> online, with:
> >>
> >> ceph osd pool set vm-storage min_size 2
> >>
> >> and expect everything to continue running?
> >>
> > Yes, it will. No rebalance will happen. min_size = 2 just tells Ceph that 2
> > replicas need to be online for I/O (read and write) to continue.
> >
> > Wido
> >
> >> (our cluster is HEALTH_OK, enough disk space, etc, etc)
> >>
> >> MJ
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
