My first suspicion would be the HBA. Are you using a RAID HBA? If so, I
suggest checking the status of your BBU/FBWC and the cache policy.
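As a sketch of what that check looks like (this assumes a Broadcom/LSI controller with storcli installed; HP Smart Array boxes would use ssacli instead, and the controller index /c0 is an assumption):

```shell
#!/bin/sh
# Hedged sketch: check BBU health and write-cache policy on an assumed
# Broadcom/LSI HBA. A failed or "learning" BBU typically forces the
# controller into write-through, which shows up as exactly this kind of
# latency spike.
STORCLI=$(command -v storcli64 || command -v storcli || true)
if [ -n "$STORCLI" ]; then
    # BBU/supercap state for controller 0
    "$STORCLI" /c0/bbu show status
    # Per-virtual-drive cache policy: look for WB (write-back)
    # vs WT (write-through)
    "$STORCLI" /c0/vall show all | grep -Ei 'cache|write'
else
    echo "storcli not found: use your vendor's CLI (ssacli, perccli, ...)"
fi
```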
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
I've been using bcache since around the middle of December (before that you
can see much higher await) for all 12 HDDs, cached on the 2 SSDs, with NVMe
for the journals. (A few months ago I also replaced all the 2TB disks with
6TB ones and added ceph4 and ceph5.)
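For a setup like that, the bcache sysfs stats are worth a look before blaming the disks themselves; a low hit ratio or a cache that has fallen back to writethrough pushes await back onto the HDDs. A minimal sketch (standard bcache sysfs paths; the bcache* device names are whatever your host actually has):

```shell
#!/bin/sh
# Hedged sketch: dump bcache cache mode, hit ratio, and dirty data per device.
for dev in /sys/block/bcache*/bcache; do
    # If the glob matched nothing, the literal pattern fails this -d test.
    [ -d "$dev" ] || { echo "no bcache devices found"; break; }
    echo "== $dev =="
    cat "$dev/cache_mode"                   # e.g. writethrough [writeback] ...
    cat "$dev/stats_total/cache_hit_ratio"  # percent of reads served from SSD
    cat "$dev/dirty_data"                   # dirty data awaiting writeback
done
```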
Here's my iostat in ganglia (just raw per-disk await):
http://www.bro
Hello list,
Just curious if anyone has ever seen this behavior and might have some
ideas on how to troubleshoot it.
We're seeing very high iowait in iostat across all OSDs on a single OSD
host. It's very spiky, dropping to zero and then shooting up to as high as
400 in some cases. Despite th
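To watch those spikes live rather than via graphs, something like this is a reasonable starting point (standard sysstat iostat flags; the interpretation notes are general rules of thumb, not specific to this host):

```shell
#!/bin/sh
# Hedged sketch: watch extended per-device stats to catch iowait spikes.
# -x extended stats (await, %util), -d devices only, "1 5" = 1s interval,
# 5 samples.
if command -v iostat >/dev/null 2>&1; then
    iostat -x -d 1 5
else
    echo "iostat not found: install the sysstat package"
fi
# Rough reading: one disk with high await while the rest idle points at
# that disk (or its cable/slot); all disks spiking together points at the
# HBA, its cache, or something upstream of the disks.
```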