Hi Eugen,

Please find the details below.
root@meghdootctr1:/var/log/ceph# ceph -s
  cluster:
    id:     c59da971-57d1-43bd-b2b7-865d392412a5
    health: HEALTH_WARN
            nodeep-scrub flag(s) set
            544 pgs not deep-scrubbed in time

  services:
    mon: 3 daemons, quorum meghdootctr1,meghdootctr2,meghdootctr3 (age 5d)
    mgr:
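In case it is useful context for the thread: the usual way to inspect and clear this particular warning, assuming the nodeep-scrub flag was set deliberately during earlier maintenance, would be something like the following (the PG id used is only an example, not one from our cluster):

```shell
# Show exactly which PGs are overdue for a deep scrub
ceph health detail

# Re-enable deep scrubbing cluster-wide; the nodeep-scrub flag suppresses it
ceph osd unset nodeep-scrub

# Optionally trigger a deep scrub on a specific overdue PG
# (2.1f here is a hypothetical PG id; take real ids from 'ceph health detail')
ceph pg deep-scrub 2.1f
```

Once the flag is cleared, the OSDs should work through the backlog on their own according to the configured deep-scrub interval.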
This is a production setup of 36 OSDs (SAS disks) totalling 180 TB, allocated to a
single Ceph cluster with 3 monitors and 3 managers. There are 830 volumes and
VMs created in OpenStack with Ceph as the backend. On Sep 21, users reported
slowness in accessing the VMs.
Analysing the logs led us to