Re: [ceph-users] Re: Re: too many PGs per OSD (307 > max 300)

2016-07-31 Thread Christian Balzer
Hello,

On Fri, 29 Jul 2016 04:46:54 + zhu tong wrote:
> Right, that was the one that I calculated the osd_pool_default_pg_num in
> our test cluster.
>
> 7 OSD, 11 pools, osd_pool_default_pg_num is calculated to be 256, but when
> ceph status shows

Already wrong, that default is
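Christian's point above is that 256 PGs per pool is far too high for a 7-OSD
cluster: the commonly cited sizing rule targets roughly 100 PGs per OSD for
the cluster as a whole, divided by the replica count and then split across
all pools. Below is a minimal sketch of that arithmetic, assuming a replica
size of 3 and the usual 100-PGs-per-OSD target (neither value is stated in
the thread):

    # Rule-of-thumb PG sizing (sketch; replica_size and target are
    # assumptions, not taken from the thread):
    #   total_pgs ~= num_osds * target_pgs_per_osd / replica_size,
    # rounded up to a power of two and split across ALL pools --
    # osd_pool_default_pg_num applies per pool, not cluster-wide.

    def next_power_of_two(n: int) -> int:
        """Smallest power of two >= n."""
        p = 1
        while p < n:
            p *= 2
        return p

    num_osds = 7              # from the thread
    num_pools = 11            # from the thread
    replica_size = 3          # assumption: default replicated size
    target_pgs_per_osd = 100  # commonly cited target

    total_pgs = next_power_of_two(num_osds * target_pgs_per_osd // replica_size)
    per_pool = next_power_of_two(max(total_pgs // num_pools, 1))

    print(total_pgs)  # 256 -- for the whole cluster, not per pool
    print(per_pool)   # 32  -- a saner per-pool pg_num for this cluster

Even with every pool rounded up to 32, this works out to about
11 * 32 * 3 / 7, i.e. roughly 150 PG replicas per OSD, comfortably under the
300 warning threshold.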

Re: [ceph-users] Re: Re: too many PGs per OSD (307 > max 300)

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 04:46:54AM +, zhu tong wrote:
> Right, that was the one that I calculated the osd_pool_default_pg_num in our
> test cluster.
>
> 7 OSD, 11 pools, osd_pool_default_pg_num is calculated to be 256, but when
> ceph status shows
>
> health HEALTH_WARN

[ceph-users] Re: Re: too many PGs per OSD (307 > max 300)

2016-07-28 Thread zhu tong
Right, that was the one that I calculated the osd_pool_default_pg_num in our
test cluster.

7 OSD, 11 pools, osd_pool_default_pg_num is calculated to be 256, but when
ceph status shows

    health HEALTH_WARN
        too many PGs per OSD (5818 > max 300)
    monmap e1: 1 mons at
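The warning itself is simple arithmetic: the monitor divides the total
number of PG replicas by the number of OSDs and compares the result against
mon_pg_warn_max_per_osd, which defaulted to 300 in releases of that era. A
sketch with the numbers quoted above, again assuming a replica size of 3
(the thread does not give pool sizes, and the reported 5818 suggests the
live cluster carried even more PGs than the quoted defaults would imply):

    # How "too many PGs per OSD (N > max 300)" comes about (sketch):
    # PG replicas per OSD = sum(pg_num * size over all pools) / num_osds,
    # compared against mon_pg_warn_max_per_osd (default 300 at the time).

    num_osds = 7
    num_pools = 11
    pg_per_pool = 256    # osd_pool_default_pg_num from the thread
    replica_size = 3     # assumption: not stated in the thread

    pgs_per_osd = num_pools * pg_per_pool * replica_size / num_osds
    print(round(pgs_per_osd))  # ~1207, already far beyond the 300 threshold

Note that in Ceph releases of this period pg_num could be raised but never
lowered, so the usual way out of this state was to recreate the
over-provisioned pools with a smaller pg_num.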