Re: [ceph-users] CephFS test-case

2013-09-06 Thread Sage Weil
[re-adding ceph-devel] On Sat, 7 Sep 2013, Nigel Williams wrote:
> On Sat, Sep 7, 2013 at 1:27 AM, Sage Weil wrote:
> > It sounds like the problem is cluster B's pools have too few PGs, making
> > the data distribution get all out of whack.
> Agree, it was too few PGs, I have now re-adjusted a
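(For reference, re-adjusting a pool's placement group count is done with the standard "ceph osd pool set" commands. A minimal sketch of driving that from Python, assuming a placeholder pool named "data" and a placeholder target of 256 PGs; pick the real target based on OSD count and replication level:)

    import subprocess

    POOL = "data"      # placeholder pool name
    TARGET_PGS = 256   # placeholder; size this to the cluster, not a recommendation

    # Raise pg_num first (creates the new placement groups), then pgp_num
    # (allows data to actually rebalance onto them).
    for var in ("pg_num", "pgp_num"):
        subprocess.check_call(["ceph", "osd", "pool", "set", POOL, var, str(TARGET_PGS)])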

Re: [ceph-users] CephFS test-case

2013-09-06 Thread Mark Nelson
On 09/06/2013 06:22 PM, Sage Weil wrote:
> [re-adding ceph-devel] On Sat, 7 Sep 2013, Nigel Williams wrote:
> > On Sat, Sep 7, 2013 at 1:27 AM, Sage Weil wrote:
> > > It sounds like the problem is cluster B's pools have too few PGs, making
> > > the data distribution get all out of whack.
> > Agree, it was too

Re: [ceph-users] CephFS test-case

2013-09-06 Thread Nigel Williams
One way might be to have a nag system, with a global flag that can turn nags off at the cluster level (for production deployments), but where the nags are added to the cluster-state messages on a regular basis to remind operators that there is something to investigate. Having an indexed list of nags wou
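(A rough sketch of that idea in Python, purely illustrative; none of these names correspond to an existing Ceph interface:)

    # Hypothetical nag mechanism: a cluster-wide switch plus an indexed nag
    # list, appended to the periodic cluster-state message.
    nags_enabled = True   # global flag; a production cluster could set this to False

    nags = {
        1: "pool 'data' has too few placement groups for its data volume",
        2: "mon clock skew exceeds the configured threshold",
    }

    def cluster_state_message(base_status):
        # Append the outstanding nags so operators see them on every report.
        if not nags_enabled or not nags:
            return base_status
        lines = [base_status, "outstanding nags:"]
        lines += ["  [%d] %s" % (i, text) for i, text in sorted(nags.items())]
        return "\n".join(lines)

    print(cluster_state_message("HEALTH_OK"))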