When a BlockDriverState is associated with a storage controller DeviceState, we expect guest I/O. Use this opportunity to bump the coroutine pool size by 64.
This patch ensures that the coroutine pool size scales with the number of drives attached to the guest. It should increase coroutine pool usage (which makes qemu_coroutine_create() fast) without hogging too much memory when fewer drives are attached.

Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
---
 block.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block.c b/block.c
index f80e2b2..c8379ca 100644
--- a/block.c
+++ b/block.c
@@ -2093,6 +2093,9 @@ int bdrv_attach_dev(BlockDriverState *bs, void *dev)
     }
     bs->dev = dev;
     bdrv_iostatus_reset(bs);
+
+    /* We're expecting I/O from the device so bump up coroutine pool size */
+    qemu_coroutine_adjust_pool_size(64);
     return 0;
 }
 
@@ -2112,6 +2115,7 @@ void bdrv_detach_dev(BlockDriverState *bs, void *dev)
     bs->dev_ops = NULL;
     bs->dev_opaque = NULL;
     bs->guest_block_size = 512;
+    qemu_coroutine_adjust_pool_size(-64);
 }
 
 /* TODO change to return DeviceState * when all users are qdevified */
-- 
1.9.3
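
[Not part of the patch: for readers unfamiliar with the pool mechanism, below is a minimal, self-contained sketch of the kind of counter-based pool-size adjustment that qemu_coroutine_adjust_pool_size() performs. The names POOL_DEFAULT_SIZE, pool_size and free_one_pooled_coroutine() are illustrative, not QEMU's actual internals, and the sketch omits the locking a real implementation in QEMU's coroutine code would need.]

/* Illustrative sketch only; not QEMU code.  Shows a pool whose maximum
 * size is raised/lowered by a signed delta, mirroring the
 * qemu_coroutine_adjust_pool_size(64) / (-64) calls in the patch. */
#include <assert.h>
#include <stdio.h>

#define POOL_DEFAULT_SIZE 64            /* assumed baseline pool size */

static int pool_max_size = POOL_DEFAULT_SIZE;
static int pool_size;                   /* coroutines currently pooled */

/* Hypothetical stand-in for destroying one pooled coroutine. */
static void free_one_pooled_coroutine(void)
{
    pool_size--;
}

static void adjust_pool_size(int n)
{
    pool_max_size += n;
    /* Callers must never take away more capacity than they added. */
    assert(pool_max_size >= POOL_DEFAULT_SIZE);

    /* If the limit shrank below the current pool size, trim the excess. */
    while (pool_size > pool_max_size) {
        free_one_pooled_coroutine();
    }
}

int main(void)
{
    adjust_pool_size(64);               /* bdrv_attach_dev(): drive attached */
    printf("pool max after attach: %d\n", pool_max_size);
    adjust_pool_size(-64);              /* bdrv_detach_dev(): drive detached */
    printf("pool max after detach: %d\n", pool_max_size);
    return 0;
}

The point of the pairing is simply that attach and detach balance their +64/-64 adjustments, so the pool's maximum falls back to its baseline once drives go away.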