How do I check the full ratio and nearfull ratio of a running cluster? I know I can set 'mon osd full ratio' and 'mon osd nearfull ratio' in the [global] section of ceph.conf, but things work fine without those lines (it uses the defaults, obviously).
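(For reference, my understanding is that the defaults are 0.85 nearfull and 0.95 full; if I were setting them explicitly, I assume it would look roughly like this in ceph.conf:)

[global]
    mon osd nearfull ratio = .85
    mon osd full ratio = .95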
They can also be changed with `ceph tell mon.* injectargs "--mon_osd_full_ratio .##"` and `ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .##"`, in which case the running cluster's notion of full/nearfull wouldn't match ceph.conf. How do I have the monitors report the values they're currently running with? (i.e. is there something like `ceph tell mon.* dumpargs...`?) It seems like this should be a pretty basic question, but my Googlefoo is failing me this morning.

For those who find this post and want to check how full their OSDs are, rather than checking the full/nearfull limits, `ceph osd df tree` seems to be the hot ticket.

And as long as I'm posting, I may as well get my next question out of the way. My minimally used 4-node, 16-OSD test cluster looks like this:

# ceph osd df tree
....
MIN/MAX VAR: 0.75/1.31  STDDEV: 0.84

When should one be concerned about imbalance? What sort of values for MIN/MAX VAR and STDDEV represent problems where reweighting an OSD (or some other action) is advisable? Is catching that the purpose of nearfull, or does one need to monitor individual OSDs too?

--
Adam Carheden
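P.S. To be explicit about the runtime changes I mentioned above, the commands look roughly like this (the values here are just placeholders, not anything I've actually applied to this cluster):

# ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .90"
# ceph tell mon.* injectargs "--mon_osd_full_ratio .97"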