thanks, that should rule out a difference in bluestore
min_alloc_size, for example.

>> Below is pasted the ceph osd df tree output.
>
> It looks like there is some pretty significant skew in terms of the
> amount of bytes per active PG. If you issue "ceph
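For anyone following along, one way to double-check that min_alloc_size
really is uniform is to ask a few OSDs directly over their admin sockets
(a sketch; note the on-disk value is baked in at mkfs time, so an OSD
created under an older default can differ from what the running config
reports):

  # query the running config of one OSD via its admin socket
  ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
  ceph daemon osd.0 config get bluestore_min_alloc_size_ssd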
seems to make our 1.5x EC overhead a 1.68x overhead in
reality. Anyone have any ideas for why this is the case?
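One explanation that fits (and matches the link Serkan posts further
down): bluestore rounds every EC chunk up to min_alloc_size, so each of
the k+m shards carries some padding. A rough model in shell; the 6+3
profile is an assumption (it matches both the stated 1.5x and the
min_size remark below about losing 3 OSDs), as is the 64 KiB HDD
min_alloc_size default of that era:

  k=6; m=3; alloc=$((64 * 1024))
  for obj in $((4 * 1024 * 1024)) $((1024 * 1024)); do
    chunk=$(( (obj + k - 1) / k ))                     # bytes per data chunk
    padded=$(( (chunk + alloc - 1) / alloc * alloc ))  # rounded up to min_alloc
    raw=$(( (k + m) * padded ))                        # all k+m shards pay the padding
    echo "$obj logical -> $raw raw ($(echo "scale=3; $raw / $obj" | bc)x)"
  done

Under those assumptions a 4 MiB object comes out around 1.55x and a
1 MiB object at about 1.69x, so an average object size near 1 MiB would
land almost exactly on the observed 1.68x; smaller objects inflate the
ratio further.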
We also have a ceph_cluster_total_used_raw_bytes metric, which I believe
should be close to data + metadata; that is why I tried to show the
metadata with sum(ceph_bluefs_db_used_bytes). Is that correct?
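If the goal is to check whether metadata accounts for the gap, one
expression to try against the mgr prometheus module (a sketch; it
assumes the module exports ceph_pool_stored, as Nautilus does):

  (sum(ceph_cluster_total_used_raw_bytes) - sum(ceph_bluefs_db_used_bytes)) / sum(ceph_pool_stored)

If the ratio stays near 1.68 after subtracting the bluefs DB bytes,
metadata alone can't explain it.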
I thought of that but it doesn't make much sense. AFAICT min_size should
block IO when I lose 3 OSDs, but it shouldn't affect the amount of
stored data. Am I missing something?
On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin wrote:
> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
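As an aside, min_size is easy to confirm per pool (the pool name below
is a placeholder):

  ceph osd pool get <poolname> min_size

With a 6+3 profile, a min_size of 7 blocks IO once three shards' OSDs
are gone, exactly as described, but it has no bearing on bytes stored.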
That seems like it. Thanks a lot Serkan!
On 26 Nov 2019 Tue at 20:08 Serkan Çoban wrote:
> Maybe following link helps...
> https://www.spinics.net/lists/dev-ceph/msg00795.html
>
> On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu
> wrote:
> >
> > I thought of that but
> > set_mon_vals failed to set public_network =
> > 192.168.109.0/24: Configuration option 'public_network' may not be
> > modified at runtime
> >
> > There are 2 things that I don't understand in these messages:
> >
> > 1. Why is it mentioning configuration option '
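As far as I can tell, that message appears when a daemon pulls config
from the mon config database and finds an option it cannot apply live.
A sketch of how to see where the value is coming from and what the
daemon actually uses:

  # is the option set in the mon config db, and what does a daemon see?
  ceph config get mon public_network
  ceph config show osd.0 | grep public_network

public_network is read when the daemon binds its sockets, so a change
stored in the config db only takes effect on the next restart; until
then the startup warning is expected.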