On Mon, Apr 01, 2019 at 02:16:52PM -0600, Alex Williamson wrote:

[...]

> @@ -1081,8 +1088,14 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>               goto out_unlock;
>       }
>  
> +     if (!atomic_add_unless(&iommu->dma_avail, -1, 0)) {
> +             ret = -ENOSPC;
> +             goto out_unlock;
> +     }
> +
>       dma = kzalloc(sizeof(*dma), GFP_KERNEL);
>       if (!dma) {
> +             atomic_inc(&iommu->dma_avail);

This should be the only special path that needs to revert the change.  I'm
not sure whether it could be avoided by simply using atomic_read() or even
READ_ONCE() (I feel like we don't need atomic ops on dma_avail because we
already hold the mutex, but of course they don't hurt...) to check against
zero here instead of atomic_add_unless(), and then doing the +1/-1 only in
vfio_[un]link_dma().  But AFAICT this patch is correct.

Reviewed-by: Peter Xu <pet...@redhat.com>

Thanks,

-- 
Peter Xu
