On Tuesday 03 Jun 2014 at 15:10:53 (+0200), Kevin Wolf wrote:
> Some code in the block layer makes potentially huge allocations. Failure
> is not completely unexpected there, so avoid aborting qemu and handle
> out-of-memory situations gracefully.
> 
> This patch addresses the allocations in the qed block driver.
> 
> Signed-off-by: Kevin Wolf <kw...@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
> ---
>  block/qed-check.c | 7 +++++--
>  block/qed.c       | 6 +++++-
>  2 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/block/qed-check.c b/block/qed-check.c
> index b473dcd..40a882c 100644
> --- a/block/qed-check.c
> +++ b/block/qed-check.c
> @@ -227,8 +227,11 @@ int qed_check(BDRVQEDState *s, BdrvCheckResult *result, bool fix)
>      };
>      int ret;
>  
> -    check.used_clusters = g_malloc0(((check.nclusters + 31) / 32) *
> -                                       sizeof(check.used_clusters[0]));
> +    check.used_clusters = g_try_malloc0(((check.nclusters + 31) / 32) *
> +                                        sizeof(check.used_clusters[0]));
> +    if (check.nclusters && check.used_clusters == NULL) {
> +        return -ENOMEM;
> +    }
>  
>      check.result->bfi.total_clusters =
>          (s->header.image_size + s->header.cluster_size - 1) /
> diff --git a/block/qed.c b/block/qed.c
> index c130e42..f0943d6 100644
> --- a/block/qed.c
> +++ b/block/qed.c
> @@ -1208,7 +1208,11 @@ static void qed_aio_write_inplace(QEDAIOCB *acb, uint64_t offset, size_t len)
>          struct iovec *iov = acb->qiov->iov;
>  
>          if (!iov->iov_base) {
> -            iov->iov_base = qemu_blockalign(acb->common.bs, iov->iov_len);
> +            iov->iov_base = qemu_try_blockalign(acb->common.bs, iov->iov_len);
> +            if (iov->iov_base == NULL) {
> +                qed_aio_complete(acb, -ENOMEM);
> +                return;
> +            }
>              memset(iov->iov_base, 0, iov->iov_len);
>          }
>      }
> -- 
> 1.8.3.1
> 
Reviewed-by: Benoit Canet <ben...@irqsave.net>
