On 8/31/2012 7:11 AM, Xiaopong Tran wrote:
> Hi,
>
> Ceph storage on each disk in the cluster is very unbalanced. On each
> node, the data seems to go to one or two disks, while the other disks
> are almost empty.
>
> I can't find anything wrong with the crush map, it's just the
> default for now. Attached is the crush map.

Have you been reweighting OSDs? I went round and round with my cluster a few days ago, reloading different crush maps, only to find that re-injecting a crush map didn't seem to overwrite the reweights.
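
For reference, the reload cycle I mean goes roughly like this (the file names are just placeholders):

    ceph osd getcrushmap -o crushmap.bin       # dump the current crush map
    crushtool -d crushmap.bin -o crushmap.txt  # decompile it to text
    # ... edit crushmap.txt ...
    crushtool -c crushmap.txt -o crushmap.new  # recompile the edited map
    ceph osd setcrushmap -i crushmap.new       # inject it back into the cluster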

Take a look at `ceph osd tree` to see if the reweight column matches the weight column.
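
If a reweight doesn't match, you can set it back explicitly; the osd id and value below are only examples:

    ceph osd tree             # compare the weight and reweight columns
    ceph osd reweight 3 1.0   # reset osd.3's reweight to 1 (no override)

The reweight value is between 0 and 1, and 1.0 means the osd takes its full crush weight.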

Note: I'm new at this. This is my experience only, with 0.48.1, and may not be entirely correct.

--
Andrew Thompson
http://aktzero.com/
