On Thu, May 10, 2012 at 5:23 PM, Tommi Virtanen <t...@inktank.com> wrote:
> Good question! "ceph -s" will show you that. This is from a run where
> I ran "ceph osd out 1" on a cluster of 3 osds. See the active+clean
> counts going up and active+recovering counts going down, and the
> "degraded" percentage dropping. The last line is an example of an "all
> done" situation.

Oh, and if your cluster is busy enough that there's always some
rebalancing going on, you might never reach 100% active+clean. In
that case, I believe "ceph pg dump" contains all the information
needed, and --format=json makes it machine-parseable, but that output
just isn't documented yet. We really should provide a good way of
accessing that information.
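For scripting, something like this sketch could tally PG states from the
JSON output. Note the field names ("pg_stats", "state") are assumptions
about the undocumented layout, and the sample data below is fabricated
for illustration -- check against what your version actually emits:

```python
import json
from collections import Counter

def pg_state_counts(pg_dump_json: str) -> Counter:
    """Count placement groups by state from `ceph pg dump --format=json`.

    Assumes the JSON carries a top-level "pg_stats" list whose entries
    each have a "state" string such as "active+clean" -- verify this
    against your cluster's actual output before relying on it.
    """
    dump = json.loads(pg_dump_json)
    return Counter(pg["state"] for pg in dump["pg_stats"])

# Fabricated sample standing in for real `ceph pg dump --format=json` output:
sample = json.dumps({
    "pg_stats": [
        {"pgid": "0.0", "state": "active+clean"},
        {"pgid": "0.1", "state": "active+clean"},
        {"pgid": "0.2", "state": "active+recovering"},
    ]
})

counts = pg_state_counts(sample)
# Fraction of PGs that are fully clean; 1.0 would mean recovery is done.
clean_fraction = counts["active+clean"] / sum(counts.values())
```

A monitoring script could poll this and alert when clean_fraction stays
below some threshold for too long.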

I filed http://tracker.newdream.net/issues/2394 to keep track of this task.