Re: [ceph-users] How does CEPH calculate PGs per OSD for erasure coded (EC) pools?

2019-04-29 Thread Christian Wuerdig
On Sun, 28 Apr 2019 at 21:45, Igor Podlesny wrote:
> On Sun, 28 Apr 2019 at 16:14, Paul Emmerich wrote:
> > Use k+m for PG calculation, that value also shows up as "erasure size"
> > in ceph osd pool ls detail
>
> So does it mean that for PG calculation those 2 pools are equivalent:
> 1) EC(4, 2)
> 2) replicated, size 6 ?

Re: [ceph-users] How does CEPH calculate PGs per OSD for erasure coded (EC) pools?

2019-04-28 Thread Igor Podlesny
On Sun, 28 Apr 2019 at 16:14, Paul Emmerich wrote:
> Use k+m for PG calculation, that value also shows up as "erasure size"
> in ceph osd pool ls detail

So does it mean that for PG calculation those 2 pools are equivalent:
1) EC(4, 2)
2) replicated, size 6 ?

Sounds weird to be honest. Replicate
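A short sketch of why the two pools look the same for PG counting even though their raw-space overhead is very different (Python; purely illustrative numbers, not from the thread):

    k, m = 4, 2
    replicated_size = 6

    osds_per_pg_ec = k + m               # 6 OSDs hold a shard of each EC PG
    osds_per_pg_repl = replicated_size   # 6 OSDs hold a copy of each replicated PG

    raw_overhead_ec = (k + m) / k        # 1.5x raw space per byte stored
    raw_overhead_repl = replicated_size  # 6x raw space per byte stored

    print(osds_per_pg_ec == osds_per_pg_repl)  # True: same placement width
    print(raw_overhead_ec, raw_overhead_repl)  # 1.5 6: very different raw usage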

Re: [ceph-users] How does CEPH calculate PGs per OSD for erasure coded (EC) pools?

2019-04-28 Thread Paul Emmerich
Use k+m for PG calculation, that value also shows up as "erasure size"
in ceph osd pool ls detail

The important thing here is how many OSDs the PG shows up on. And the EC
PG shows up on all k+m OSDs.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io
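To illustrate Paul's point, a hedged sketch (Python; pg_num, k, m and the OSD count are assumed example values, not from the thread): the per-OSD PG load depends on how many OSDs each PG is placed on, which is the pool size for replicated pools and k+m for EC pools.

    # Each PG of an EC(k, m) pool places a shard on k+m OSDs, so k+m plays
    # the same role as "size" does for a replicated pool.
    def pgs_per_osd(pg_num, placement_width, num_osds):
        # placement_width: pool size for replicated pools, k+m for EC pools
        return pg_num * placement_width / num_osds

    print(pgs_per_osd(pg_num=256, placement_width=4 + 2, num_osds=12))  # -> 128.0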

[ceph-users] How does CEPH calculate PGs per OSD for erasure coded (EC) pools?

2019-04-28 Thread Igor Podlesny
For replicated pools (without rounding to the nearest power of two) the overall number of PGs is calculated as: Pools_PGs = 100 * (OSDs / Pool_Size), where 100 is the target number of PGs per single OSD related to that pool, and Pool_Size is a factor showing how much raw storage would in fact be used to store
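A minimal sketch of that formula (Python; the OSD count and pool size below are assumed example values, not from the thread):

    # Pools_PGs = target_pgs_per_osd * OSDs / Pool_Size, before any rounding
    # to the nearest power of two.
    def pool_pgs(num_osds, pool_size, target_pgs_per_osd=100):
        # pool_size: replication factor, i.e. the raw-space multiplier
        # for a replicated pool
        return target_pgs_per_osd * num_osds / pool_size

    print(pool_pgs(num_osds=12, pool_size=3))  # -> 400.0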