[ceph-users] Re: EC pool used space high

2019-11-27 Thread Erdem Agaoglu
Hi again,

Even with this, our 6+3 EC pool with the default bluestore_min_alloc_size of
64KiB, filled with 4MiB RBD objects, should not take 1.67x space. It should be
around 1.55x. There is still a 12% unaccounted overhead. Could there be
something else too?
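
For reference, a rough back-of-the-envelope sketch of where the ~1.55x figure
comes from, assuming each EC chunk is stored as its own bluestore object and
its allocation is rounded up to bluestore_min_alloc_size (plain Python, not
Ceph code):

    import math

    k, m = 6, 3                # EC profile 6+3
    object_size_kib = 4096     # 4 MiB RBD object
    min_alloc_kib = 64         # bluestore_min_alloc_size

    chunk = object_size_kib / k                                # ~682.7 KiB of data per chunk
    alloc = math.ceil(chunk / min_alloc_kib) * min_alloc_kib   # rounded up to 704 KiB on disk
    raw = (k + m) * alloc                                      # 9 chunks -> 6336 KiB stored
    print(raw / object_size_kib)                               # ~1.547, i.e. about 1.55x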

Best,

On Tue, Nov 26, 2019 at 8:08 PM Serkan Çoban  wrote:

> Maybe the following link helps...
> https://www.spinics.net/lists/dev-ceph/msg00795.html
>
> On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu 
> wrote:
> >
> > I thought of that but it doesn't make much sense. AFAICT min_size should
> block IO when I lose 3 OSDs, but it shouldn't affect the amount of
> stored data. Am I missing something?
> >
> > On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin 
> wrote:
> >>
> >> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
> >>
> >>
> >> What I can't find is the 138,509 G difference between the
> ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static,
> BTW; checking the same data historically shows we have about 1.12x of what
> we expect. This seems to make our 1.5x EC overhead a 1.68x overhead in
> reality. Anyone have any ideas for why this is the case?
> >>
> >> Maybe it's min_size related? Because you are right, 6+3 is 1.50, but 6+3
> (+1) is your calculated 1.67.
> >>
> >>
> >>
> >> k
> >
> >
> >
> > --
> > erdem agaoglu
>


-- 
erdem agaoglu


[ceph-users] Re: EC pool used space high

2019-11-26 Thread Erdem Agaoglu
That seems like it. Thanks a lot Serkan!

On 26 Nov 2019 Tue at 20:08 Serkan Çoban  wrote:

> Maybe the following link helps...
> https://www.spinics.net/lists/dev-ceph/msg00795.html
>
> On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu 
> wrote:
> >
> > I thought of that but it doesn't make much sense. AFAICT min_size should
> block IO when I lose 3 OSDs, but it shouldn't affect the amount of
> stored data. Am I missing something?
> >
> > On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin 
> wrote:
> >>
> >> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
> >>
> >>
> >> What I can't find is the 138,509 G difference between the
> ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static,
> BTW; checking the same data historically shows we have about 1.12x of what
> we expect. This seems to make our 1.5x EC overhead a 1.68x overhead in
> reality. Anyone have any ideas for why this is the case?
> >>
> >> Maybe it's min_size related? Because you are right, 6+3 is 1.50, but 6+3
> (+1) is your calculated 1.67.
> >>
> >>
> >>
> >> k
> >
> >
> >
> > --
> > erdem agaoglu
>
-- 
erdem agaoglu


[ceph-users] Re: EC pool used space high

2019-11-26 Thread Serkan Çoban
Maybe the following link helps...
https://www.spinics.net/lists/dev-ceph/msg00795.html

On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu  wrote:
>
> I thought of that but it doesn't make much sense. AFAICT min_size should 
> block IO when I lose 3 OSDs, but it shouldn't affect the amount of stored
> data. Am I missing something?
>
> On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin  wrote:
>>
>> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
>>
>>
>> What I can't find is the 138,509 G difference between the 
>> ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static,
>> BTW; checking the same data historically shows we have about 1.12x of what
>> we expect. This seems to make our 1.5x EC overhead a 1.68x overhead in 
>> reality. Anyone have any ideas for why this is the case?
>>
>> Maybe it's min_size related? Because you are right, 6+3 is 1.50, but 6+3 (+1)
>> is your calculated 1.67.
>>
>>
>>
>> k
>
>
>
> --
> erdem agaoglu


[ceph-users] Re: EC pool used space high

2019-11-26 Thread Erdem Agaoglu
I thought of that but it doesn't make much sense. AFAICT min_size should
block IO when I lose 3 OSDs, but it shouldn't affect the amount of
stored data. Am I missing something?
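
A minimal sketch of that reasoning (plain Python, not Ceph code), assuming
the usual (k+m)/k raw-space multiplier for erasure-coded pools:

    def ec_raw_multiplier(k, m):
        # Each object is written as k data chunks plus m coding chunks,
        # so raw usage per byte of user data is (k + m) / k.
        # min_size only sets how many shards must be up to serve IO;
        # it does not add extra shards on disk.
        return (k + m) / k

    print(ec_raw_multiplier(6, 3))   # 1.5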

On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin  wrote:

> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
>
>
> What I can't find is the 138,509 G difference between the
> ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not static,
> BTW; checking the same data historically shows we have about 1.12x of what
> we expect. This seems to make our 1.5x EC overhead a 1.68x overhead in
> reality. Anyone have any ideas for why this is the case?
>
> Maybe it's min_size related? Because you are right, 6+3 is 1.50, but 6+3
> (+1) is your calculated 1.67.
>
>
>
> k
>


-- 
erdem agaoglu


[ceph-users] Re: EC pool used space high

2019-11-25 Thread Konstantin Shalygin

On 11/25/19 6:05 PM, Erdem Agaoglu wrote:


What I can't find is the 138,509 G difference between the 
ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not
static, BTW; checking the same data historically shows we have about
1.12x of what we expect. This seems to make our 1.5x EC overhead a 
1.68x overhead in reality. Anyone have any ideas for why this is the case?


Maybe it's min_size related? Because you are right, 6+3 is 1.50, but 6+3
(+1) is your calculated 1.67.
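
(As a quick sanity check on those figures: 9 shards over 6 data shards is
9/6 = 1.50, while a hypothetical tenth shard would give 10/6 ≈ 1.67, which
matches the observed ratio.)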




k
