On Wed, Feb 11, 2026 at 10:50 AM Claudio Jeker <[email protected]> wrote:
>
> On Wed, Feb 11, 2026 at 10:24:20AM -0300, K R wrote:
> > Same panic, this time with show malloc included.  Please let me know
> > if you need additional ddb commands next time.
> >
> > Thanks,
> > --Kor
> >
> > ddb{1}> show panic
> > ddb{1}> *cpu0: malloc: out of space in kmem_map
>
>
> Something is using all memory in kmem_map and then the system goes boom.
> It is not malloc itself: the show malloc output does not show any bucket
> that consumes a lot of memory.
>
> show all pools is another place memory may hide, since multi-page pools
> use the kmem_map as well.

Thanks, I'll submit show all pools (and perhaps show uvmexp?) -- the
machine panicked again; I'm waiting for the ddb commands to be run by
the remote admin.
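
For the record, these are the ddb commands I plan to ask the remote
admin to run at the next panic (all of them appear in this thread or
in ddb(4)):

    show panic
    tr
    ps
    show malloc
    show all pools
    show uvmexp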
>
> You can actually watch this at runtime and see if something is slowly
> growing. vmstat -m is great for that.
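
Once the box is back up I'll also leave a crude loop like this running
to see what creeps upward over time (untested sketch; the pool name
and the interval are just examples):

    # print the pool header plus the pfstate line every 5 minutes
    while :; do
            date
            vmstat -m | grep -e '^Name' -e '^pfstate'
            sleep 300
    done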

It may be unrelated, but it caught my eye that the only pool with a
nonzero Fail count is this one:

Name     Size Requests    Fail InUse  Pgreq  Pgrel Npage Hiwat Minpg Maxpg Idle
pfstate   384 27749088 7613228  7622 391809 390986   823 10001     0     8    8

dmesg was showing:
uvm_mapent_alloc: out of static map entries
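
If I read that right, pf has been refusing new states for a while,
presumably because the state table is at its limit.  Once the machine
is back up I can check that live with something like the following
(see pfctl(8); the grep pattern is a guess at the exact output, and
the limit value below is only an example):

    # current state table usage
    pfctl -si | grep -i -A 3 "state table"
    # pf pool limits, including the hard limit on states
    pfctl -sm
    # if the limit is the bottleneck, raise it in /etc/pf.conf, e.g.
    #   set limit states 200000
    # and reload with: pfctl -f /etc/pf.conf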

>
> > ddb{1}> tr
> > ddb{1}> savectx() at savectx+0xae
> > end of kernel
> > end trace frame: 0x7ae419eb6220, count: -1
> >
> > ddb{1}> ps
> > ddb{1}>    PID     TID   PPID    UID  S       FLAGS  WAIT          COMMAND
> >  56384   64771      1      0  3    0x100083  ttyin         getty
> >  72686   27501      1      0  3    0x100083  ttyin         getty
> >  36459  175760      1      0  3    0x100083  ttyin         getty
> >  81450   57092      1      0  3    0x100083  ttyin         getty
> >  96634  204314      1      0  3    0x100083  ttyin         ksh
> >  65510  363957      1      0  3    0x100098  kqread        cron
> >  88845  514315      1  10000  3        0x80  kqread        python3.12
> >  97326  174375      1  10000  3        0x80  kqread        python3.12
> >  22253   67759      1  10000  3        0x80  kqread        python3.12
> >  35444   44336      1  10000  3        0x90  kqread        python3.12
> >  29275   40912      1  10000  3        0x90  kqread        python3.12
> >  35724  333760      1  10000  3        0x80  kqread        python3.12
> >  50769  288439      1  10000  3        0x90  kqread        python3.12
> >  82066  241649      1  10000  3        0x10  netlock       python3.12
> >  53406  363520      1  10000  3        0x80  kqread        python3.12
> >  74524  458639      1  10000  3        0x90  kqread        python3.12
> >  74524  199419      1  10000  3   0x4000090  fsleep        python3.12
> >  74524   83014      1  10000  3   0x4000090  fsleep        python3.12
> > *26047  417104      1     76  7   0x1000010                p0f3
> >    763  241728      1    760  3        0x90  kqread        snmpd
> >  57219  438582   3052     95  3   0x1100092  kqread        smtpd
> >  38463  126657   3052    103  3   0x1100092  kqread        smtpd
> >  64172  119575   3052     95  3   0x1100092  kqread        smtpd
> >  51265  104678   3052     95  3    0x100092  kqread        smtpd
> >  43007  287543   3052     95  3   0x1100092  kqread        smtpd
> >  40890  233284   3052     95  3   0x1100092  kqread        smtpd
> >   3052  172226      1      0  3    0x100080  kqread        smtpd
> >  26543   98368      1      0  3        0x88  kqread        sshd
> >  79246   67776      0      0  3     0x14200  acct          acct
> >   4238   13076      1      0  3    0x100080  kqread        ntpd
> >  68994  428671  87754     83  3    0x100092  kqread        ntpd
> >  87754  161036      1     83  3   0x1100092  kqread        ntpd
> >  30143  296065      1     53  3   0x1000090  kqread        unbound
> >   6513  123759  16181     74  3   0x1100092  bpf           pflogd
> >  16181   56823      1      0  3        0x80  sbwait        pflogd
> >  77125   81173  61465     73  3   0x1100090  kqread        syslogd
> >  61465  258272      1      0  3    0x100082  sbwait        syslogd
> >  88652  122618      0      0  3     0x14200  bored         smr
> >  17119  301982      0      0  3     0x14200  pgzero        zerothread
> >  75306  268311      0      0  3     0x14200  aiodoned      aiodoned
> >  89902  288787      0      0  3     0x14200  syncer        update
> >  77149   53678      0      0  3     0x14200  cleaner       cleaner
> >  74601  396045      0      0  3     0x14200  reaper        reaper
> >  14050  464621      0      0  3     0x14200  pgdaemon      pagedaemon
> >  59034  421709      0      0  3     0x14200  bored         wsdisplay0
> >  42208  103791      0      0  3     0x14200  usbtsk        usbtask
> >   3252  461912      0      0  3     0x14200  usbatsk       usbatsk
> >  58242  495231      0      0  3  0x40014200  acpi0         acpi0
> >  57561  206381      0      0  3  0x40014200                idle1
> >  61023  369667      0      0  3     0x14200  bored         softnet1
> >  34197  326659      0      0  3     0x14200  netlock       softnet0
> >  52256  164467      0      0  3     0x14200  bored         systqmp
> >  71398   21045      0      0  7     0x14200                systq
> >   6890  354256      0      0  3     0x14200  tmoslp        softclockmp
> >  60049  384374      0      0  3  0x40014200  tmoslp        softclock
> >  74022  123588      0      0  3  0x40014200                idle0
> >      1  198910      0      0  3        0x82  wait          init
> >      0       0     -1      0  3     0x10200  scheduler     swapper
> >
> > ddb{1}> show reg
> > rdi               0xffffffff829f04f8    kprintf_mutex
> > rsi                              0x5
> > rbp               0xffff80002ddbdc10
> > rbx                                0
> > rdx                                0
> > rcx                           0x1900    __ALIGN_SIZE+0x900
> > rax                             0x3c
> > r8                           0x70000    acpi_pdirpa+0x5be71
> > r9                0xffff80002dc3b000
> > r10                                0
> > r11                0x986d6894b8c166b
> > r12                                0
> > r13                                0
> > r14               0xffff80002dd302b8
> > r15                                0
> > rip               0xffffffff823723ee    savectx+0xae
> > cs                               0x8
> > rflags                          0x46
> > rsp               0xffff80002ddbdb90
> > ss                              0x10
> > savectx+0xae:   movl    $0,%gs:0x688
> >
> > ddb{1}> show malloc
> > ddb{1}>            Type InUse  MemUse  HighUse   Limit  Requests Type Lim
> >          devbuf  2194   5224K    5289K 186616K      8223        0
> >             pcb    17   8208K   12304K 186616K        45        0
> >          rtable  2173     57K      60K 186616K     52874        0
> >              pf    20     39K      55K 186616K      1586        0
> >          ifaddr   400     97K      97K 186616K       400        0
> >         ifgroup    27      1K       1K 186616K        30        0
> >          sysctl     4      1K       9K 186616K        10        0
> >        counters    54     35K      35K 186616K        54        0
> >        ioctlops     0      0K       4K 186616K     39816        0
> >           mount     6      6K       6K 186616K         6        0
> >          vnodes  1263     79K      79K 186616K      1339        0
> >       UFS quota     1     32K      32K 186616K         1        0
> >       UFS mount    25     65K      65K 186616K        25        0
> >             shm     2      1K       1K 186616K         2        0
> >          VM map     2      1K       1K 186616K         2        0
> >             sem     2      0K       0K 186616K         2        0
> >         dirhash   351     68K      68K 186616K       381        0
> >            ACPI  3761    457K     633K 186616K     18596        0
> >       file desc    12     20K      21K 186616K        34        0
> >            proc    96     76K      93K 186616K      4935        0
> >     NFS srvsock     1      0K       0K 186616K         1        0
> >      NFS daemon     1     16K      16K 186616K         1        0
> >        in_multi   531     29K      29K 186616K       531        0
> >     ether_multi   130      8K       8K 186616K       130        0
> >     ISOFS mount     1     32K      32K 186616K         1        0
> >   MSDOSFS mount     1     16K      16K 186616K         1        0
> >            ttys    37     97K      97K 186616K        37        0
> >            exec     0      0K       1K 186616K     39087        0
> >    fusefs mount     1     32K      32K 186616K         1        0
> >             tdb     3      0K       0K 186616K         3        0
> >         VM swap     8    582K     584K 186616K        10        0
> >        UVM amap  4623    584K    1015K 186616K    346048        0
> >        UVM aobj     3      2K       2K 186616K         3        0
> >      pinsyscall    68    136K     210K 186616K    108216        0
> >             USB    21     15K      15K 186616K        25        0
> >      USB device     8      0K       0K 186616K         8        0
> >          USB HC     1      0K       0K 186616K         1        0
> >         memdesc     1      4K       4K 186616K         1        0
> >     crypto data     1      1K       1K 186616K         1        0
> >     ip6_options     1      0K       3K 186616K     21040        0
> >             NDP     5      0K      16K 186616K       134        0
> >            temp    10   8622K    8751K 186616K   9681105        0
> >          kqueue    37     70K      80K 186616K      1586        0
> >       SYN cache     2     16K      16K 186616K         2        0
> >
> > On Mon, Jan 19, 2026 at 4:23 PM K R <[email protected]> wrote:
> > >
> > > >Synopsis:      panic: malloc: out of space in kmem_map
> > > >Category:      kernel amd64
> > > >Environment:
> > >         System      : OpenBSD 7.8
> > >         Details     : OpenBSD 7.8 (GENERIC.MP) #1: Sat Nov 29 11:02:59 MST 2025
> > >
> > > [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> > >
> > >         Architecture: OpenBSD.amd64
> > >         Machine     : amd64
> > > >Description:
> > >
> > > The machine is running 7.8 + syspatches under VMware:
> > >
> > > hw.model=Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
> > > hw.vendor=VMware, Inc.
> > > hw.product=VMware20,1
> > > hw.physmem=4277600256
> > > hw.ncpufound=2
> > > hw.ncpuonline=2
> > >
> > > and panics with an "out of space in kmem_map" message.  The panic,
> > > trace, and ps output are shown below.
> > >
> > > I wish I could include show malloc output, but the machine is in a
> > > remote location and these are the only ddb commands I got before the
> > > operator decided to reboot.
> > >
> > > panic: malloc: out of space in kmem_map
> > > Stopped at      db_enter+0x14:  popq    %rbp
> > >     TID    PID    UID     PRFLAGS     PFLAGS  CPU  COMMAND
> > > *327273  39043      0     0x14000      0x200    0  systq
> > >
> > > db_enter() at db_enter+0x14
> > > panic(ffffffff82573eac) at panic+0xd5
> > > malloc(2a39,2,9) at malloc+0x823
> > > vmt_nicinfo_task(ffff8000000f8800) at vmt_nicinfo_task+0xec
> > > taskq_thread(ffffffff82a19e10) at taskq_thread+0x129
> > > end trace frame: 0x0, count: -5
> > >
> > > PID     TID   PPID    UID  S       FLAGS  WAIT          COMMAND
> > >  34434  429242      1      0  3    0x100083  ttyin         getty
> > >  45351  273621      1      0  3    0x100083  ttyin         getty
> > >  15766   13242      1      0  3    0x100083  ttyin         getty
> > >  22501  485732      1      0  3    0x100083  ttyin         getty
> > >  21121   14373      1      0  3    0x100083  ttyin         getty
> > >  80812  223396      1      0  3    0x100098  kqread        cron
> > >  38632  393850      1  10000  3        0x80  kqread        python3.12
> > >  50241  286369      1  10000  3        0x80  kqread        python3.12
> > >  47425  216199      1  10000  3        0x80  kqread        python3.12
> > >  15348  391586      1  10000  3        0x90  kqread        python3.12
> > >  83699  242757      1  10000  3        0x90  kqread        python3.12
> > >  85859  155143      1  10000  3        0x80  kqread        python3.12
> > >    140   96058      1  10000  3        0x90  kqread        python3.12
> > >  16478  159685      1  10000  3        0x90  kqread        python3.12
> > >  83476  226912      1  10000  3        0x80  kqread        python3.12
> > >  90068  368113      1  10000  3        0x90  kqread        python3.12
> > >  48780   36449      1     76  3   0x1000090  kqread        p0f3
> > >  41298  290255      1    760  3        0x90  kqread        snmpd
> > >  47065  410042  45934     95  3   0x1100092  kqread        smtpd
> > >  69131  288318  45934    103  3   0x1100092  kqread        smtpd
> > >  16340   95197  45934     95  3   0x1100092  kqread        smtpd
> > >  93858  467609  45934     95  3    0x100092  kqread        smtpd
> > >  77301  381360  45934     95  3   0x1100092  kqread        smtpd
> > >  21497  499144  45934     95  3   0x1100092  kqread        smtpd
> > >  45934  163643      1      0  3    0x100080  kqread        smtpd
> > >  16761  447799      1      0  3        0x88  kqread        sshd
> > >  57214  310491      0      0  3     0x14200  acct          acct
> > >  56721  278490      1      0  3    0x100080  kqread        ntpd
> > >  57480  393701   1368     83  3    0x100092  kqread        ntpd
> > >   1368  281100      1     83  3   0x1100092  kqread        ntpd
> > >  24741  184818      1     53  3   0x1000090  kqread        unbound
> > >  74565  391331  50900     74  3   0x1100092  bpf           pflogd
> > >  50900   22496      1      0  3        0x80  sbwait        pflogd
> > >  65059  173120   1614     73  3   0x1100090  kqread        syslogd
> > >   1614  223274      1      0  3    0x100082  sbwait        syslogd
> > >  12330  136338      0      0  3     0x14200  bored         smr
> > >  60396   73572      0      0  3     0x14200  pgzero        zerothread
> > >  46408  208812      0      0  3     0x14200  aiodoned      aiodoned
> > >  44729  344674      0      0  3     0x14200  syncer        update
> > >  61833  363291      0      0  3     0x14200  cleaner       cleaner
> > >  52556  361252      0      0  3     0x14200  reaper        reaper
> > >  64026  456140      0      0  3     0x14200  pgdaemon      pagedaemon
> > >  75515  242523      0      0  3     0x14200  bored         wsdisplay0
> > >  14784  395040      0      0  3     0x14200  usbtsk        usbtask
> > >  78465  209741      0      0  3     0x14200  usbatsk       usbatsk
> > >  70654  374635      0      0  3  0x40014200  acpi0         acpi0
> > >  48248   77950      0      0  7  0x40014200                idle1
> > >  21581   78258      0      0  3     0x14200  bored         softnet1
> > >  42528  246111      0      0  3     0x14200  netlock       softnet0
> > >  84149  341522      0      0  3     0x14200  bored         systqmp
> > > *39043  327273      0      0  7     0x14200                systq
> > >  50129  384305      0      0  3     0x14200  netlock       softclockmp
> > >  86142  318003      0      0  3  0x40014200  tmoslp        softclock
> > >  95618  290560      0      0  3  0x40014200                idle0
> > >      1  184077      0      0  3        0x82  wait          init
> > >      0       0     -1      0  3     0x10200  scheduler     swapper
> > >
> > > >How-To-Repeat:
> > >
> > > It seems to be related to VMware when the machine is under
> > > medium/heavy network traffic.  Other bare-metal machines with similar
> > > daemons/traffic work just fine.
> > >
> > > Is there any command (vmstat, systat, etc.) I could run while the
> > > machine is alive that might help?
> > >
> > > Thanks,
> > > --Kor
> > >
> > > >Fix:
> > >
> > > Unknown.
> >
>
> --
> :wq Claudio
