That seems like it. Thanks a lot, Serkan!
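
For anyone who finds this thread later: if I read the linked post right, the
gap comes from BlueStore allocation granularity. Roughly speaking, every EC
chunk write gets rounded up to bluestore_min_alloc_size (64 KB was the HDD
default at the time; worth verifying on your own cluster). Below is a minimal
sketch of that arithmetic in Python; the 1 MiB and 4 MiB object sizes are
illustrative assumptions, not measurements from this cluster:

    import math

    MIN_ALLOC = 64 * 1024   # assumed bluestore_min_alloc_size_hdd default
    K, M = 6, 3             # our EC profile: k=6 data, m=3 coding chunks

    def raw_used(object_size):
        """Raw bytes one object consumes in a k+m EC pool on BlueStore."""
        chunk = math.ceil(object_size / K)                # logical shard size
        alloc = math.ceil(chunk / MIN_ALLOC) * MIN_ALLOC  # rounded up on disk
        return alloc * (K + M)                            # k + m shards total

    for size in (1 * 2**20, 4 * 2**20):                   # 1 MiB, 4 MiB objects
        print(f"{size >> 20} MiB object: {raw_used(size) / size:.2f}x raw "
              f"(nominal {(K + M) / K:.2f}x)")

For ~1 MiB objects this prints 1.69x, suspiciously close to the 1.68x we see;
larger objects amortize the rounding and land closer to the nominal 1.50x.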

On Tue, 26 Nov 2019 at 20:08, Serkan Çoban <cobanser...@gmail.com> wrote:

> Maybe the following link helps:
> https://www.spinics.net/lists/dev-ceph/msg00795.html
>
> On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu <erdem.agao...@gmail.com>
> wrote:
> >
> > I thought of that, but it doesn't make much sense. AFAICT min_size
> > should block IO when I lose 3 OSDs, but it shouldn't affect the amount
> > of stored data. Am I missing something?
> >
> > On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin <k0...@k0ste.ru>
> > wrote:
> >>
> >> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
> >>
> >>
> >> What I can't find is the 138,509 G difference between
> >> ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not
> >> static, BTW: checking the same data historically shows we have about
> >> 1.12x of what we expect. That seems to turn our 1.5x EC overhead into
> >> a 1.68x overhead in reality. Does anyone have any ideas why this is
> >> the case?
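> >>
> >> A quick sketch of that arithmetic (Python; the factors come from the
> >> Prometheus counters above, the exact values are illustrative):
> >>
> >>     stored_ratio = 9 / 6        # 6+3 EC: nominal raw multiplier, 1.50x
> >>     inflation = 1.12            # extra used/stored factor we observe
> >>     print(stored_ratio * inflation)   # -> 1.68, the effective overhead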
> >>
> >> Maybe it's min_size related? You are right that 6+3 is 9/6 = 1.50x,
> >> but 6+3 (+1) is 10/6, which is your calculated 1.67x.
> >>
> >>
> >>
> >> k
> >
> >
> >
> > --
> > erdem agaoglu
>
-- 
erdem agaoglu
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
