Re: Deep-Scrub and High Read Latency with QEMU/RBD

2013-09-11 Thread Mike Dawson
I created Issue #6278 (http://tracker.ceph.com/issues/6278) to track this issue.

Thanks,
Mike Dawson

On 8/30/2013 1:52 PM, Andrey Korolyov wrote:
> On Fri, Aug 30, 2013 at 9:44 PM, Mike Dawson wrote:
>> Andrey,
>>
>> I use all the defaults:
>>
>> # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config […]

Re: Deep-Scrub and High Read Latency with QEMU/RBD

2013-08-30 Thread Andrey Korolyov
On Fri, Aug 30, 2013 at 9:44 PM, Mike Dawson wrote:
> Andrey,
>
> I use all the defaults:
>
> # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep scrub
>   "osd_scrub_thread_timeout": "60",
>   "osd_scrub_finalize_thread_timeout": "600",
>   "osd_max_scrubs": "1",

This one. I […]
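[For context: osd_max_scrubs caps how many PGs a single OSD will scrub concurrently. A minimal sketch of lowering it at runtime without restarting the OSDs; the osd.* tell form may not be available on older releases, in which case the per-OSD form applies (osd.1 here is just an example id):

    # change the value on all OSDs at runtime
    ceph tell osd.* injectargs '--osd-max-scrubs 1'

    # older CLI form, one OSD at a time
    ceph osd tell 1 injectargs '--osd-max-scrubs 1'
]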

Re: Deep-Scrub and High Read Latency with QEMU/RBD

2013-08-30 Thread Mike Dawson
Andrey,

I use all the defaults:

# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep scrub
  "osd_scrub_thread_timeout": "60",
  "osd_scrub_finalize_thread_timeout": "600",
  "osd_max_scrubs": "1",
  "osd_scrub_load_threshold": "0.5",
  "osd_scrub_min_interval": "86400",
  "osd […]
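[For reference: if any of these settings need to deviate from the defaults permanently, they can be pinned in ceph.conf on the OSD hosts; a brief sketch using the values quoted above (a restart or injectargs is needed for a change to take effect):

    [osd]
        osd max scrubs = 1
        osd scrub load threshold = 0.5
        osd scrub min interval = 86400
]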

Re: Deep-Scrub and High Read Latency with QEMU/RBD

2013-08-30 Thread Andrey Korolyov
You may want to reduce the number of PGs scrubbing per OSD to 1 via the config option and check the results.

On Fri, Aug 30, 2013 at 8:03 PM, Mike Dawson wrote:
> We've been struggling with an issue of spikes of high i/o latency with
> qemu/rbd guests. As we've been chasing this bug, we've greatly improved the […]
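[To "check the results", scrub activity can be watched while the setting is in place; a small sketch using standard commands (PG states containing "scrubbing" or "scrubbing+deep" indicate active scrubs):

    # stream cluster events as scrubs start and finish
    ceph -w

    # list PGs whose state currently includes scrubbing
    ceph pg dump | grep scrubbing
]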

Deep-Scrub and High Read Latency with QEMU/RBD

2013-08-30 Thread Mike Dawson
We've been struggling with an issue of spikes of high i/o latency with
qemu/rbd guests. As we've been chasing this bug, we've greatly improved the
methods we use to monitor our infrastructure. It appears that our RBD
performance chokes in two situations:

- Deep-Scrub
- Backfill/recovery

In th […]
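[One way to confirm the correlation with deep-scrub is to pause scrubbing briefly and watch whether the latency spikes disappear; a sketch, assuming a release that supports the noscrub/nodeep-scrub OSD flags:

    # temporarily stop new scrubs and deep-scrubs from being scheduled
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # ... observe guest i/o latency for a while ...

    # re-enable scrubbing afterwards
    ceph osd unset nodeep-scrub
    ceph osd unset noscrub
]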