On 07/18/2014 09:54 AM, Andrey Korolyov wrote:
On Fri, Jul 18, 2014 at 6:58 PM, Chris Friesen
<chris.frie...@windriver.com> wrote:
Hi,
I've recently run up against an interesting issue. I had a number of
guests running, and when I started doing heavy disk I/O on a virtio disk
(backed via ceph rbd), memory consumption spiked and triggered the
OOM-killer.
I want to reserve some memory for I/O, but I don't know how much it can use
in the worst case.
Is there a limit on the number of in-flight I/O operations? (Preferably a
configurable option, but even a hard-coded limit would be good to know.)
Thanks,
Chris
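
One related knob that does exist is QEMU's per-drive I/O throttling (the
iops=/bps= suboptions of -drive); it caps the request rate rather than
putting a hard limit on outstanding buffers. A minimal sketch, with a
made-up rbd image name and illustrative numbers:

    # Illustrative only: throttle the rbd-backed virtio disk to 200 IOPS and
    # 64 MB/s; this limits how fast requests can be issued, not a strict cap
    # on how many can be in flight at once.
    qemu-system-x86_64 -m 2048 -enable-kvm \
        -drive file=rbd:volumes/guest01,format=raw,if=virtio,cache=none,iops=200,bps=67108864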
Hi, are you using per-VM cgroups, or did this happen on a bare system?
The Ceph backend has a writeback cache setting; maybe you are hitting it,
but it would have to be set enormously large for that.
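
For reference, the librbd writeback cache is tuned through client-side
ceph.conf options along these lines (values here are purely illustrative):

    [client]
        rbd cache = true
        rbd cache size = 33554432        # size of the cache, in bytes
        rbd cache max dirty = 25165824   # dirty bytes allowed before writeback
        rbd cache max dirty age = 1.0    # seconds before dirty data is flushed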
This is without cgroups. (I think we tried cgroups and ran into
some issues.) Would cgroups even help with iSCSI/rbd/etc.?
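
For what it's worth, a minimal sketch of what a per-VM cap could look like
with the cgroup v1 memory controller (the group name, limit, and $QEMU_PID
are made up); whether the rbd buffers would actually be charged to it is
exactly the open question:

    # Illustrative only: put one qemu process under a 4 GiB memory cgroup
    mkdir /sys/fs/cgroup/memory/vm-guest01
    echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/vm-guest01/memory.limit_in_bytes
    echo "$QEMU_PID" > /sys/fs/cgroup/memory/vm-guest01/tasks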
The "-drive" parameter in qemu was using "cache=none" for the VMs in
question. But I'm assuming it keeps the buffer around until acked by
the far end in order to be able to handle retries.
Chris
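
A quick way to test that assumption is to watch the qemu process's RSS
while the guest does heavy I/O and see whether it tracks the amount of
outstanding I/O; a rough sketch, with 'guest01' standing in for whatever
identifies the process:

    # Sample the qemu process RSS once a second during the I/O load
    pid=$(pgrep -f 'qemu.*guest01')   # 'guest01' is a placeholder
    while sleep 1; do
        echo "$(date +%T) $(grep VmRSS /proc/$pid/status)"
    done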