Hi Stefan,
Any idea if the reads are constant or bursty? One cause of heavy reads is when RocksDB is compacting and has to read SST files from disk. It's also possible you could see heavy read traffic during writes if data has to be read from SST files rather than from cache. This could be related to the OSD memory autotuning feature: it tries to keep OSD memory usage within a certain footprint (4GB by default), which supersedes the bluestore cache size (it automatically sets the cache size based on the osd_memory_target).
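If you want to rule the autotuner in or out, the relevant knobs can be set in ceph.conf. A minimal sketch (option names as used by the autotuning backport; the values below are examples only, so verify against the release notes for your version before applying):

```ini
[osd]
# Disable cache autotuning and pin an explicit bluestore cache size
# instead (example value only -- size it for your hardware):
bluestore_cache_autotune = false
bluestore_cache_size_hdd = 1073741824   ; 1 GiB

# Or keep autotuning but raise the per-OSD memory budget
# above the 4 GiB default:
# osd_memory_target = 6442450944        ; 6 GiB
```

Restarting the OSD with autotuning disabled and the old-style cache size set would tell you quickly whether the new behavior is involved.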
To see what's happening during compaction, you can run this script against one of your bluestore OSD logs:
https://github.com/ceph/cbt/blob/master/tools/ceph_rocksdb_log_parser.py

Mark

On 1/14/19 1:35 PM, Stefan Priebe - Profihost AG wrote:
Hi,

while trying to upgrade a cluster from 12.2.8 to 12.2.10 I'm experiencing issues with bluestore OSDs, so I cancelled the upgrade and all bluestore OSDs are stopped now. After starting a bluestore OSD I'm seeing a lot of slow requests caused by very high read rates:

Device: rrqm/s wrqm/s    r/s   w/s     rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda      45,00 187,00 767,00 39,00 482040,00 8660,00  1217,62    58,16  74,60   73,85   89,23   1,24 100,00

It reads permanently at ~500MB/s from the disk and can't service client requests; the overall client read rate is only 10.9MiB/s. I can't reproduce this with 12.2.8. Is this a known bug / regression?

Greets,
Stefan

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com