On 27/02/2015, at 17.04, Udo Lembke <ulem...@polarzone.de> wrote:

> ceph health detail
> HEALTH_WARN pool ssd-archiv has too few pgs
Slightly different, but I had an issue with my Ceph cluster underneath a PVE cluster 
yesterday.

I had two Ceph pools for RBD virtual disks: vm_images (boot HDD images) and rbd_data 
(extra HDD images).

Then, while adding pools for a RADOS Gateway (.rgw.*), the health status suddenly said 
that my vm_images pool had too few PGs, so I ran:

ceph osd pool set vm_images pg_num <larger_number>
ceph osd pool set vm_images pgp_num <larger_number>
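
(For anyone following along, a quick way to check the current values before and after 
bumping them, using the standard pool-get commands and the same pool name as above:

ceph osd pool get vm_images pg_num
ceph osd pool get vm_images pgp_num

Note that pgp_num has to be raised to match pg_num before the data actually starts 
moving onto the new placement groups.)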

That kicked off about 20 minutes of rebalancing with a lot of I/O in the Ceph cluster. 
Eventually the cluster was fine again, but almost all of my PVE VMs ended up in a 
stopped state. I'm still wondering why; a watchdog thing maybe...
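
(If anyone hits the same thing: the rebalance progress can be followed with the usual 
status commands, and on the PVE side qm list would show which VMs ended up stopped:

ceph -s
ceph -w
qm list
)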

/Steffen

