The branch main has been updated by vangyzen:

URL: https://cgit.FreeBSD.org/src/commit/?id=490b09f240065d7ef61f68ec1bf134d729cfad28

commit 490b09f240065d7ef61f68ec1bf134d729cfad28
Author:     Eric van Gyzen <[email protected]>
AuthorDate: 2022-03-07 17:12:15 +0000
Commit:     Eric van Gyzen <[email protected]>
CommitDate: 2022-03-26 01:10:38 +0000

    uma_zalloc_domain: call uma_zalloc_debug in multi-domain path
    
    It was only called in the non-NUMA and single-domain paths.
    Some of its assertions were duplicated in uma_zalloc_domain,
    but some things were missed, especially memguard.
    
    Reviewed by:    markj, rstone
    MFC after:      1 week
    Sponsored by:   Dell EMC Isilon
    Differential Revision:  https://reviews.freebsd.org/D34472
---
 sys/vm/uma_core.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/sys/vm/uma_core.c b/sys/vm/uma_core.c
index 9fd2fd5e5d03..77182e460a54 100644
--- a/sys/vm/uma_core.c
+++ b/sys/vm/uma_core.c
@@ -3816,12 +3816,6 @@ uma_zalloc_domain(uma_zone_t zone, void *udata, int domain, int flags)
        CTR4(KTR_UMA, "uma_zalloc_domain zone %s(%p) domain %d flags %d",
            zone->uz_name, zone, domain, flags);
 
-       if (flags & M_WAITOK) {
-               WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
-                   "uma_zalloc_domain: zone \"%s\"", zone->uz_name);
-       }
-       KASSERT(curthread->td_critnest == 0 || SCHEDULER_STOPPED(),
-           ("uma_zalloc_domain: called with spinlock or critical section held"));
        KASSERT((zone->uz_flags & UMA_ZONE_SMR) == 0,
            ("uma_zalloc_domain: called with SMR zone."));
 #ifdef NUMA
@@ -3831,6 +3825,11 @@ uma_zalloc_domain(uma_zone_t zone, void *udata, int domain, int flags)
        if (vm_ndomains == 1)
                return (uma_zalloc_arg(zone, udata, flags));
 
+#ifdef UMA_ZALLOC_DEBUG
+       if (uma_zalloc_debug(zone, &item, udata, flags) == EJUSTRETURN)
+               return (item);
+#endif
+
        /*
         * Try to allocate from the bucket cache before falling back to the keg.
         * We could try harder and attempt to allocate from per-CPU caches or
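
As background, the new #ifdef UMA_ZALLOC_DEBUG block gives the multi-domain path the same debug hook that uma_zalloc_arg() already uses: a single helper that performs the WITNESS and critical-section checks removed above and, with memguard, may satisfy the allocation itself and signal that with EJUSTRETURN. The following is only a rough sketch of that shape, reconstructed from the removed assertions and the commit message; the real uma_zalloc_debug() in sys/vm/uma_core.c differs in detail (M_EXEC and pcpu-zone assertions, running the zone's init/ctor on the memguard item, and so on).

    /*
     * Illustrative sketch only; not the actual sys/vm/uma_core.c code.
     */
    static int
    uma_zalloc_debug(uma_zone_t zone, void **itemp, void *udata, int flags)
    {
            int error = 0;

            /* Sleepable-allocation and critical-section checks. */
            if (flags & M_WAITOK) {
                    WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
                        "uma_zalloc_debug: zone \"%s\"", zone->uz_name);
            }
            KASSERT(curthread->td_critnest == 0 || SCHEDULER_STOPPED(),
                ("uma_zalloc_debug: called within spinlock or critical section"));

    #ifdef DEBUG_MEMGUARD
            /*
             * Let memguard service the allocation when this zone is being
             * guarded.  EJUSTRETURN tells the caller to return *itemp
             * directly instead of going to the bucket cache or keg.
             * (The real code also runs the zone's init/ctor here, using
             * udata.)
             */
            if (memguard_cmp_zone(zone)) {
                    *itemp = memguard_alloc(zone->uz_size, flags);
                    if (*itemp != NULL)
                            error = EJUSTRETURN;
            }
    #endif
            return (error);
    }

Calling this helper in uma_zalloc_domain(), rather than duplicating individual assertions there, is what ensures the memguard path is not missed in multi-domain allocations.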
