Re: [Qemu-devel] [PATCH 08/20] nfs: Handle failure for potentially large allocations

2014-05-22 Thread Stefan Hajnoczi
On Wed, May 21, 2014 at 06:28:06PM +0200, Kevin Wolf wrote:
 Some code in the block layer makes potentially huge allocations. Failure
 is not completely unexpected there, so avoid aborting qemu and handle
 out-of-memory situations gracefully.
 
 This patch addresses the allocations in the nfs block driver.
 
 Signed-off-by: Kevin Wolf <kw...@redhat.com>
 ---
  block/nfs.c | 6 +++++-
  1 file changed, 5 insertions(+), 1 deletion(-)

Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>




Re: [Qemu-devel] [PATCH 08/20] nfs: Handle failure for potentially large allocations

2014-05-22 Thread ronnie sahlberg
For this case and for the iscsi case, isn't it likely that once this
happens the guest is pretty much doomed, since I/O to the disk will no
longer work?

Should we also change the block layer so that if *_readv/_writev fails
with -ENOMEM, it tries again, breaking the request up into a chain of
smaller chunks?
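
A minimal sketch of that fallback, for discussion only: bdrv_co_writev(),
qemu_iovec_init/concat/destroy() and MIN() are existing QEMU calls, but the
wrapper itself and the halving policy are hypothetical, not code that exists
in the tree:

static int coroutine_fn writev_enomem_fallback(BlockDriverState *bs,
                                               int64_t sector_num,
                                               int nb_sectors,
                                               QEMUIOVector *qiov)
{
    int done = 0;               /* sectors written so far */
    int chunk = nb_sectors;     /* current chunk size, halved on -ENOMEM */

    while (done < nb_sectors) {
        int n = MIN(chunk, nb_sectors - done);
        QEMUIOVector part;
        int ret;

        /* Build a sub-iovec covering just this chunk of the request */
        qemu_iovec_init(&part, qiov->niov);
        qemu_iovec_concat(&part, qiov,
                          (size_t)done * BDRV_SECTOR_SIZE,
                          (size_t)n * BDRV_SECTOR_SIZE);
        ret = bdrv_co_writev(bs, sector_num + done, n, &part);
        qemu_iovec_destroy(&part);

        if (ret == -ENOMEM && chunk > 1) {
            chunk /= 2;         /* retry the same range in smaller pieces */
            continue;
        }
        if (ret < 0) {
            return ret;         /* other errors stay fatal, as today */
        }
        done += n;
    }
    return 0;
}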



On Wed, May 21, 2014 at 9:28 AM, Kevin Wolf <kw...@redhat.com> wrote:
 Some code in the block layer makes potentially huge allocations. Failure
 is not completely unexpected there, so avoid aborting qemu and handle
 out-of-memory situations gracefully.

 This patch addresses the allocations in the nfs block driver.

 Signed-off-by: Kevin Wolf <kw...@redhat.com>
 ---
  block/nfs.c | 6 +++++-
  1 file changed, 5 insertions(+), 1 deletion(-)

 diff --git a/block/nfs.c b/block/nfs.c
 index 539bd95..e3d6216 100644
 --- a/block/nfs.c
 +++ b/block/nfs.c
 @@ -165,7 +165,11 @@ static int coroutine_fn nfs_co_writev(BlockDriverState *bs,

  nfs_co_init_task(client, &task);

 -buf = g_malloc(nb_sectors * BDRV_SECTOR_SIZE);
 +buf = g_try_malloc(nb_sectors * BDRV_SECTOR_SIZE);
 +if (buf == NULL) {
 +return -ENOMEM;
 +}
 +
  qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);

 if (nfs_pwrite_async(client->context, client->fh,
 --
 1.8.3.1





[Qemu-devel] [PATCH 08/20] nfs: Handle failure for potentially large allocations

2014-05-21 Thread Kevin Wolf
Some code in the block layer makes potentially huge allocations. Failure
is not completely unexpected there, so avoid aborting qemu and handle
out-of-memory situations gracefully.

This patch addresses the allocations in the nfs block driver.

Signed-off-by: Kevin Wolf <kw...@redhat.com>
---
 block/nfs.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/block/nfs.c b/block/nfs.c
index 539bd95..e3d6216 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -165,7 +165,11 @@ static int coroutine_fn nfs_co_writev(BlockDriverState *bs,
 
 nfs_co_init_task(client, &task);
 
-buf = g_malloc(nb_sectors * BDRV_SECTOR_SIZE);
+buf = g_try_malloc(nb_sectors * BDRV_SECTOR_SIZE);
+if (buf == NULL) {
+return -ENOMEM;
+}
+
 qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);
 
 if (nfs_pwrite_async(client->context, client->fh,
-- 
1.8.3.1
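
For reference, the difference the patch leans on: GLib's g_malloc() aborts
the process when an allocation fails, while g_try_malloc() returns NULL and
lets the caller recover. A minimal standalone sketch (the 1 GiB size is an
arbitrary stand-in for a "potentially huge" guest request):

#include <glib.h>

int main(void)
{
    gsize len = (gsize)1024 * 1024 * 1024;
    void *buf = g_try_malloc(len);    /* returns NULL instead of aborting */

    if (buf == NULL) {
        g_printerr("allocation failed, caller can return -ENOMEM\n");
        return 1;
    }

    g_free(buf);
    return 0;
}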