We have an 84-OSD cluster with volumes and images pools for OpenStack. I
was having trouble with full OSDs, so I increased the PG count from the
default of 128 to 2700. That rebalanced the OSDs, but the cluster is now
stuck at 15% degraded:

http://hastebin.com/wixarubebe.dos

That's the output of ceph health detail. I've never seen a PG in the
state active+remapped+wait_backfill+backfill_toofull before. Clearly I
should have increased the PG count more gradually, but here I am, frozen
and afraid to do anything.
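For reference, here is roughly how I've been poking at it so far. The
injectargs line at the end is something I found suggested elsewhere but
have NOT run; the 0.90 ratio there is just an example value, and whether
osd df and that option are available may depend on the Ceph release:

```shell
# Show which PGs are stuck and why (backfill_toofull means a backfill
# target OSD is above its backfill-full ratio)
ceph health detail
ceph pg dump_stuck unclean

# Check per-OSD utilization to see which OSDs are nearly full
ceph osd df

# A temporary workaround some people suggest: raise the backfill-full
# ratio so backfill can proceed (0.90 is only an example, not applied)
ceph tell osd.\* injectargs '--osd-backfill-full-ratio 0.90'
```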

Any suggestions? Thanks.

--Greg Chavez
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
