Although RamDiscardMgr can handle running into the maximum number of
DMA mappings by propagating errors when creating a DMA mapping, we want
to sanity-check the setup early and warn the user that there is a
theoretical issue and that virtio-mem might not be able to provide as
much memory to a VM as desired.

As suggested by Alex, let's use the number of KVM memory slots to guess
how many other mappings we might see over time.
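
As an illustration (actual numbers depend on the setup): a single 1 TiB
virtio-mem device with a 2 MiB block size can consume up to
1 TiB / 2 MiB = 524288 DMA mappings once fully plugged, while the
in-kernel vfio type1 IOMMU driver defaults to 65535 mappings per
container (dma_entry_limit); bumping the block size to 1 GiB brings
that worst case down to 1024 mappings.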

Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: "Michael S. Tsirkin" <m...@redhat.com>
Cc: Alex Williamson <alex.william...@redhat.com>
Cc: Dr. David Alan Gilbert <dgilb...@redhat.com>
Cc: Igor Mammedov <imamm...@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.li...@gmail.com>
Cc: Peter Xu <pet...@redhat.com>
Cc: Auger Eric <eric.au...@redhat.com>
Cc: Wei Yang <richard.weiy...@linux.alibaba.com>
Cc: teawater <teawat...@linux.alibaba.com>
Cc: Marek Kedzierski <mkedz...@redhat.com>
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
 hw/vfio/common.c | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 78be813a53..166ec6ec62 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -761,6 +761,49 @@ static void vfio_register_ram_discard_notifier(VFIOContainer *container,
                               vfio_ram_discard_notify_discard_all);
     rdmc->register_listener(rdm, section->mr, &vrdl->listener);
     QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next);
+
+    /*
+     * Sanity-check if we have a theoretically problematic setup where we could
+     * exceed the maximum number of possible DMA mappings over time. We assume
+     * that each mapped section in the same address space as a RamDiscardMgr
+     * section consumes exactly one DMA mapping, with the exception of
+     * RamDiscardMgr sections; i.e., we don't expect to have gIOMMU sections in
+     * the same address space as RamDiscardMgr sections.
+     *
+     * We assume that each section in the address space consumes one memslot.
+     * We take the number of KVM memory slots as a best guess for the maximum
+     * number of sections in the address space we could have over time,
+     * also consuming DMA mappings.
+     */
+    if (container->dma_max_mappings) {
+        unsigned int vrdl_count = 0, vrdl_mappings = 0, max_memslots = 512;
+
+#ifdef CONFIG_KVM
+        if (kvm_enabled()) {
+            max_memslots = kvm_get_max_memslots();
+        }
+#endif
+
+        QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
+            hwaddr start, end;
+
+            start = QEMU_ALIGN_DOWN(vrdl->offset_within_address_space,
+                                    vrdl->granularity);
+            end = ROUND_UP(vrdl->offset_within_address_space + vrdl->size,
+                           vrdl->granularity);
+            vrdl_mappings += (end - start) / vrdl->granularity;
+            vrdl_count++;
+        }
+
+        if (vrdl_mappings + max_memslots - vrdl_count >
+            container->dma_max_mappings) {
+            warn_report("%s: possibly running out of DMA mappings. E.g., try"
+                        " increasing the 'block-size' of virtio-mem devices."
+                        " Maximum possible DMA mappings: %d, Maximum possible"
+                        " memslots: %d", __func__, container->dma_max_mappings,
+                        max_memslots);
+        }
+    }
 }
 
 static void vfio_unregister_ram_discard_listener(VFIOContainer *container,
-- 
2.29.2

