On Thu, Feb 12, 2026 at 10:01 AM Claudio Jeker <[email protected]> wrote:
>
> On Thu, Feb 12, 2026 at 09:38:32AM -0300, K R wrote:
> > On Wed, Feb 11, 2026 at 10:50 AM Claudio Jeker <[email protected]>
> > wrote:
> > >
> > > On Wed, Feb 11, 2026 at 10:24:20AM -0300, K R wrote:
> > > > Same panic, this time with show malloc included. Please let me know
> > > > if you need additional ddb commands next time.
> > > >
> > > > Thanks,
> > > > --Kor
> > > >
> > > > ddb{1}> show panic
> > > > ddb{1}> *cpu0: malloc: out of space in kmem_map
> > >
> > >
> > > Something is using all memory in kmem_map and then the system goes boom.
> > > It is not malloc itself; the show malloc output does not show any
> > > bucket that consumes a lot of memory.
> > >
> > > show all pools is another place memory may hide, since multi-page pools
> > > use the kmem_map as well.
> >
> > Thanks, I'll submit show all pools (and perhaps show uvmexp?) -- the
> > machine panicked again; I'm waiting for the ddb commands to be run by the
> > remote admin.
> >
> > >
> > > You can actually watch this at runtime and see if something is slowly
> > > growing. vmstat -m is great for that.
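
For watching that over time, something along these lines might help log which
pools keep growing (a rough, untested sketch; the Npage column position is
assumed from the vmstat -m pool listing quoted below):

#!/usr/bin/env python3
# Rough sketch: poll "vmstat -m" and report pools whose Npage count keeps
# growing, to help spot what is slowly eating kernel memory. Column order is
# assumed from the pool listing (Name Size Requests Fail InUse Pgreq Pgrel
# Npage Hiwat Minpg Maxpg Idle).
import subprocess
import time

def pool_npages():
    """Return {pool_name: npage} parsed from the pool section of vmstat -m."""
    out = subprocess.run(["vmstat", "-m"], capture_output=True, text=True).stdout
    pages = {}
    for line in out.splitlines():
        cols = line.split()
        # Pool lines have a name plus 11 numeric columns; headers and the
        # malloc-bucket section do not match this shape.
        if len(cols) == 12 and cols[1].isdigit():
            pages[cols[0]] = int(cols[7])   # Npage
    return pages

prev = pool_npages()
while True:
    time.sleep(60)
    cur = pool_npages()
    for name in sorted(cur):
        delta = cur[name] - prev.get(name, 0)
        if delta > 0:
            print(f"{name}: Npage {prev.get(name, 0)} -> {cur[name]} (+{delta})")
    prev = cur
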
> >
> > It may be unrelated, but it caught my eye that the only pool with failed
> > requests is this one:
> >
> > Name       Size  Requests     Fail InUse  Pgreq  Pgrel Npage Hiwat Minpg Maxpg Idle
> > pfstate     384  27749088  7613228  7622 391809 390986   823 10001     0     8    8
>
> This is probably unrelated. The pfstate table normally has a limit in
> place, and afaik the requests fail because you hit that limit. By default
> the limit is 100'000 and you seem to be hitting it.
> Check the pfctl -si and -sm outputs to see if this matches up.
You're right -- the states limit is set to 100000. Right now, after
the reboot, pfctl -si shows around 7k states.
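
To keep an eye on it, a quick periodic check of the state count against the
hard limit could look like this (just a sketch; the "current entries" and
"states hard limit" field names are assumed from typical pfctl -si / -sm
output):

#!/usr/bin/env python3
# Sketch: report pf state table usage as a fraction of the hard limit.
import re
import subprocess

def pfctl(*args):
    return subprocess.run(["pfctl", *args], capture_output=True, text=True).stdout

current = int(re.search(r"current entries\s+(\d+)", pfctl("-si")).group(1))
limit = int(re.search(r"states\s+hard limit\s+(\d+)", pfctl("-sm")).group(1))
print(f"pf states: {current}/{limit} ({100.0 * current / limit:.1f}%)")
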
>
> You want to look at the Npage column, which tells you how many pages this
> pool is currently using.
>
> > dmesg was showing:
> > uvm_mapent_alloc: out of static map entries
>
> That is kind of ok. On startup, part of the kmem_map is preallocated, and
> this warning triggers when more map entries are needed. It is an
> indication that your system needs lots of kernel memory, but not why.
Another ddb session, this time with show all pools included.
ddb{1}> show panic
panic: malloc: out of space in kmem_map
Stopped at db_enter+0x14: popq %rbp
TID PID UID PRFLAGS PFLAGS CPU COMMAND
256720 41745 76 0x1000010 0 0 p0f3
*475540 93944 0 0x14000 0x200 1 systq
ddb{1}> tr
db_enter() at db_enter+0x14
panic(ffffffff8257309f) at panic+0xd5
malloc(2a39,2,9) at malloc+0x823
vmt_nicinfo_task(ffff8000000f8800) at vmt_nicinfo_task+0xec
taskq_thread(ffffffff82a35098) at taskq_thread+0x129
end trace frame: 0x0, count: -5
ddb{1}> ps
PID TID PPID UID S FLAGS WAIT COMMAND
32361 114306 1 0 3 0x100083 ttyin getty
76746 121801 1 0 3 0x100083 ttyin getty
20725 29068 1 0 3 0x100083 ttyin getty
35993 381407 1 0 3 0x100083 ttyin getty
60503 504653 1 0 3 0x100083 ttyin getty
11449 119573 1 0 3 0x100098 kqread cron
771 19141 1 10000 3 0x80 kqread python3.12
63138 115999 1 10000 3 0x80 kqread python3.12
8530 420008 1 10000 3 0x80 kqread python3.12
66123 475850 1 10000 3 0x90 kqread python3.12
41619 255613 1 10000 3 0x90 kqread python3.12
43238 427572 1 10000 3 0x80 kqread python3.12
16158 185473 1 10000 3 0x90 kqread python3.12
16812 59253 1 10000 3 0x10 netlock python3.12
97652 241813 1 10000 3 0x80 kqread python3.12
59782 48953 1 10000 3 0x90 kqread python3.12
59782 83884 1 10000 3 0x4000090 fsleep python3.12
59782 3069 1 10000 3 0x4000090 fsleep python3.12
59782 142848 1 10000 3 0x4000090 fsleep python3.12
41745 256720 1 76 7 0x1000010 p0f3
25681 271028 1 760 3 0x90 kqread snmpd
73034 112955 18899 95 3 0x1100092 kqread smtpd
75230 185787 18899 103 3 0x1100092 kqread smtpd
47800 13881 18899 95 3 0x1100092 kqread smtpd
98162 484914 18899 95 3 0x100092 kqread smtpd
75291 508020 18899 95 3 0x1100092 kqread smtpd
97973 49842 18899 95 3 0x1100092 kqread smtpd
18899 71726 1 0 3 0x100080 kqread smtpd
8170 297205 1 0 3 0x88 kqread sshd
85911 274071 0 0 3 0x14200 acct acct
51344 522570 1 0 3 0x100080 kqread ntpd
5525 352873 96960 83 3 0x100092 kqread ntpd
96960 493623 1 83 3 0x1100092 kqread ntpd
27424 23532 1 53 3 0x1000090 kqread unbound
71898 307724 45282 74 3 0x1100092 bpf pflogd
45282 184378 1 0 3 0x80 sbwait pflogd
40784 65875 59852 73 3 0x1100090 kqread syslogd
59852 41237 1 0 3 0x100082 sbwait syslogd
37419 189156 0 0 3 0x14200 bored smr
78444 303084 0 0 3 0x14200 pgzero zerothread
66703 469361 0 0 3 0x14200 aiodoned aiodoned
79667 504010 0 0 3 0x14200 syncer update
78924 419519 0 0 3 0x14200 cleaner cleaner
60716 422039 0 0 3 0x14200 reaper reaper
51516 55743 0 0 3 0x14200 pgdaemon pagedaemon
19162 493331 0 0 3 0x14200 bored wsdisplay0
52282 204815 0 0 3 0x14200 usbtsk usbtask
15833 187543 0 0 3 0x14200 usbatsk usbatsk
91249 481825 0 0 3 0x40014200 acpi0 acpi0
74029 34155 0 0 3 0x40014200 idle1
51975 485714 0 0 3 0x14200 bored softnet1
86056 208742 0 0 2 0x14200 softnet0
52214 148534 0 0 3 0x14200 bored systqmp
*93944 475540 0 0 7 0x14200 systq
59299 362359 0 0 3 0x14200 tmoslp softclockmp
11202 373347 0 0 3 0x40014200 tmoslp softclock
13311 63227 0 0 3 0x40014200 idle0
1 271149 0 0 3 0x82 wait init
0 0 -1 0 3 0x10200 scheduler swapper
ddb{1}> show reg
rdi 0x4
rsi 0x14
rbp 0xffff80002dcb5c00
rbx 0x9
rdx 0
rcx 0x1900 __ALIGN_SIZE+0x900
rax 0x28
r8 0x70000 acpi_pdirpa+0x5be71
r9 0xffff80002dc3b000
r10 0
r11 0xc654fa3f353adee
r12 0
r13 0x2a39 __ALIGN_SIZE+0x1a39
r14 0
r15 0xffff80002dc48bb8
rip 0xffffffff816ca494 db_enter+0x14
cs 0x8
rflags 0x206
rsp 0xffff80002dcb5c00
ss 0x10
db_enter+0x14: popq %rbp
ddb{1}> show malloc
Type InUse MemUse HighUse Limit Requests Type Lim
devbuf 2195 5225K 5289K 186616K 13945 0
pcb 17 8208K 12304K 186616K 45 0
rtable 2163 57K 60K 186616K 133448 0
pf 20 39K 55K 186616K 3817 0
ifaddr 400 97K 97K 186616K 400 0
ifgroup 27 1K 1K 186616K 30 0
sysctl 4 1K 9K 186616K 10 0
counters 54 35K 35K 186616K 54 0
ioctlops 0 0K 4K 186616K 94809 0
mount 6 6K 6K 186616K 6 0
vnodes 1263 79K 79K 186616K 1407 0
UFS quota 1 32K 32K 186616K 1 0
UFS mount 25 65K 65K 186616K 25 0
shm 2 1K 1K 186616K 2 0
VM map 2 1K 1K 186616K 2 0
sem 2 0K 0K 186616K 2 0
dirhash 351 68K 68K 186616K 381 0
ACPI 3761 457K 633K 186616K 18596 0
file desc 13 25K 29K 186616K 36 0
proc 106 85K 101K 186616K 7390 0
NFS srvsock 1 0K 0K 186616K 1 0
NFS daemon 1 16K 16K 186616K 1 0
in_multi 531 29K 29K 186616K 531 0
ether_multi 130 8K 8K 186616K 130 0
ISOFS mount 1 32K 32K 186616K 1 0
MSDOSFS mount 1 16K 16K 186616K 1 0
ttys 43 123K 123K 186616K 43 0
exec 0 0K 1K 186616K 92977 0
fusefs mount 1 32K 32K 186616K 1 0
tdb 3 0K 0K 186616K 3 0
VM swap 8 582K 584K 186616K 10 0
UVM amap 4432 546K 1128K 186616K 836765 0
UVM aobj 3 2K 2K 186616K 3 0
pinsyscall 69 138K 220K 186616K 257360 0
USB 21 15K 15K 186616K 25 0
USB device 8 0K 0K 186616K 8 0
USB HC 1 0K 0K 186616K 1 0
memdesc 1 4K 4K 186616K 1 0
crypto data 1 1K 1K 186616K 1 0
ip6_options 1 0K 3K 186616K 64430 0
NDP 5 0K 16K 186616K 134 0
temp 10 8623K 8750K 186616K 26000352 0
kqueue 38 72K 82K 186616K 3881 0
SYN cache 2 16K 16K 186616K 2 0
ddb{1}> show all pools
Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
plcache 128 26 0 0 1 0 1 1 0 8 0
rtpcb 120 15 0 15 1 1 0 1 0 8 0
rtentry 136 13640 0 12589 61 24 37 38 0 8 0
unpcb 144 1729525 0 1729474 157 154 3 9 0 8 1
syncache 336 6581574 0 6581522 5756 5749 7 538 0 8 0
sackhl 24 726 0 726 514 514 0 1 0 8 0
tcpqe 32 2372973 0 2372951 29 28 1 1 0 8 0
tcpcb 736 4309640 103 4091353 20236 382 19854 19854 0 8 0
arp 96 254 0 0 7 0 7 7 0 8 0
inpcb 328 5892193 0 5673850 18519 314 18205 18205 0 8 0
nd6 112 133 0 1 4 0 4 4 0 8 0
pfosfp 40 28560 0 28137 5 0 5 5 0 8 0
pfosfpen 112 28560 0 27846 21 0 21 21 0 8 0
pfrke_plain 168 528267 0 514337 21910 21266 644 1195 0 8 0
pfrktable 1344 208 0 199 2 0 2 2 0 8 0
hfscintsc 48 273 0 266 1 0 1 1 0 8 0
hfscclass 592 195 0 190 1 0 1 1 0 8 0
pfanchor 1288 1 0 0 1 0 1 1 0 8 0
pftag 88 5 0 0 1 0 1 1 0 8 0
pfqueue 320 156 0 152 1 0 1 1 0 8 0
pfruleitem 16 1434599 0 1434598 1 0 1 1 0 8 0
pfstitem 24 16598784 0 16587337 9990 9907 83 603 0 8 0
pfstkey 128 16598784 0 16587336 58760 58322 438 3226 0 8 0
pfstate 384 16598777 5933960 16587284 239196 237883 1313 10001 0 8 0
pfrule 1344 3526 0 3423 60 44 16 17 0 8 0
rttmr 136 12597 0 12589 26 25 1 2 0 8 0
art_heap8 4096 1 0 0 1 0 1 1 0 8 0
art_heap4 256 72464 0 71916 605 570 35 40 0 8 0
art_table 40 72465 0 71916 14 8 6 7 0 8 0
art_node 32 13640 0 13226 6 0 6 6 0 8 0
dirhash 1024 589 0 38 69 0 69 69 0 8 0
dino2pl 256 38144 0 3612 2230 71 2159 2159 0 8 0
ffsino 256 38144 0 3612 2230 71 2159 2159 0 8 0
nchpl 144 57342 0 52912 181 16 165 165 0 8 0
rtmask 32 897 0 874 1 0 1 1 0 8 0
vnodes 216 37682 0 0 2094 0 2094 2094 0 8 0
namei 1024 1853012 0 1853012 27 26 1 2 0 8 1
percpumem 16 42 0 0 1 0 1 1 0 8 0
xhcixfer 280 52 0 51 1 0 1 1 0 8 0
pfiaddrpl 120 234 0 216 1 0 1 1 0 8 0
kstatmem 264 13 0 0 1 0 1 1 0 8 0
scxspl 216 1121854 0 1121854 213 212 1 2 1 8 1
plimitpl 152 1375 0 1345 2 0 2 2 0 8 0
sigapl 424 86146 0 86091 48 40 8 10 0 8 0
knotepl 120 124767 0 0 163 69 94 96 0 8 0
kqueuepl 184 4040 0 4006 2 0 2 2 0 8 0
pipepl 304 50506 0 50475 158 155 3 4 0 8 0
fdescpl 448 86127 0 86091 210 205 5 7 0 8 0
filepl 120 10387353 0 10386295 62 26 36 38 0 8 0
lockfpl 104 8 0 8 3 3 0 1 0 8 0
lockfspl 48 4 0 4 3 3 0 1 0 8 0
sessionpl 144 1577 0 1550 2 0 2 2 0 8 0
pgrppl 48 1595 0 1568 1 0 1 1 0 8 0
ucredpl 104 14861 0 14822 2 0 2 2 0 8 0
zombiepl 144 86091 0 86091 701 701 0 1 0 8 0
processpl 1152 86146 0 86091 46 41 5 6 0 8 0
procpl 664 86268 0 86210 48 42 6 7 0 8 0
sockpl 552 7621733 0 7403339 15972 364 15608 15608 0 8 0
mcl64k 65536 410 0 0 9 7 2 5 0 8 0
mcl16k 16384 297 0 0 8 4 4 5 0 8 0
mcl12k 12288 100 0 0 4 1 3 3 0 8 0
mcl9k 9216 76 0 0 2 0 2 2 0 8 0
mcl8k 8192 298 0 0 7 3 4 5 0 8 0
mcl4k 4096 7663 0 0 132 129 3 8 0 8 0
mcl2k2 2112 16431 0 0 108 88 20 45 0 8 0
mcl2k 2048 35400 0 0 686 680 6 49 0 8 0
mtagpl 96 2409 0 0 1 0 1 1 0 8 0
mbufpl 256 286232 0 0 13640 5 13635 13635 0 8 0
bufpl 280 1101974 0 1085709 3041 1669 1372 2833 0 8 0
anonpl 32 283271 0 0 1919 0 1919 1919 0 508 0
amapchunkpl 152 1966777 0 1956223 1411 923 488 544 0 158 0
amappl16 200 64035 0 63870 575 566 9 186 0 8 0
amappl15 192 1868 0 1830 71 69 2 3 0 8 0
amappl14 184 1846 0 1798 32 29 3 3 0 8 0
amappl13 176 96536 0 96476 234 231 3 5 0 8 0
amappl12 168 194245 0 194129 20 13 7 8 0 8 0
amappl11 160 2270 0 2210 4 1 3 3 0 8 0
amappl10 152 4622 0 4524 45 41 4 5 0 8 0
amappl9 144 122166 0 122052 46 41 5 5 0 8 0
amappl8 136 9057 0 8951 38 34 4 37 0 8 0
amappl7 128 3817 0 3747 4 1 3 4 0 8 0
amappl6 120 62846 0 62693 75 70 5 6 0 8 0
amappl5 112 7853 0 7597 31 23 8 10 0 8 0
amappl4 104 118285 0 117843 61 48 13 58 0 8 0
amappl3 96 266095 0 265699 115 102 13 16 0 8 0
amappl2 88 174383 0 172933 172 138 34 137 0 8 0
amappl1 80 2049988 0 2046881 211 140 71 72 0 8 0
amappl 88 561551 0 558733 230 131 99 105 0 92 0
uvmvnodes 80 37682 0 0 770 0 770 770 0 8 0
dma4096 4096 1 0 1 1 1 0 1 0 8 0
dma512 512 32 0 30 4 3 1 4 0 8 0
dma256 256 10 0 10 1 1 0 1 0 8 0
dma128 128 257 0 257 1 1 0 1 0 8 0
dma64 64 11 0 11 1 1 0 1 0 8 0
dma32 32 9 0 9 1 1 0 1 0 8 0
dma16 16 29 0 29 1 1 0 1 0 8 0
aobjpl 72 2 0 0 1 0 1 1 0 8 0
uaddrrnd 24 86133 0 86097 1 0 1 1 0 8 0
uaddrbest 32 2 0 0 1 0 1 1 0 8 0
uaddr 24 86133 0 86097 1 0 1 1 0 8 0
vmmpekpl 168 3308103 0 3308038 8 3 5 5 0 8 0
vmmpepl 168 10701189 0 10686024 4021 3244 777 1135 0 357 0
vmsppl 368 86132 0 86097 181 177 4 6 0 8 0
rwobjpl 40 2695195 0 2648473 680 189 491 494 0 8 0
pdppl 4096 86133 0 86097 4788 4752 36 62 0 8 0
pvpl 32 367300 0 155 2961 0 2961 2961 0 265 0
pmappl 216 86133 0 86097 42 39 3 4 0 8 0
extentpl 40 89 0 54 1 0 1 1 0 8 0
phpool 112 50821 0 12836 1157 71 1086 1086 0 8 0
ddb{1}> show uvmexp
Current UVM status:
pagesize=4096 (0x1000), pagemask=0xfff, pageshift=12
1008304 VM pages: 149436 active, 178226 inactive, 1 wired, 409769 free (74045 zero)
freemin=33610, free-target=44813, inactive-target=0, wired-max=336101
faults=30023206, traps=30763243, intrs=69135355, ctxswitch=158096894
fpuswitch=0
softint=17400395, syscalls=533007242, kmapent=19
fault counts:
noram=0, noanon=0, noamap=0, pgwait=0, pgrele=0
relocks=340800(4587), upgrades=0(0) anget(retries)=14458882(0), amapcopy=4422224
neighbor anon/obj pg=7046336/15531222, gets(lock/unlock)=5332400/345408
cases: anon=11835641, anoncow=2623241, obj=4147681, prcopy=1180111, przero=10236520
daemon and swap counts:
woke=0, revs=0, scans=0, obscans=0, anscans=0
busy=0, freed=0, reactivate=0, deactivate=0
pageouts=0, pending=0, nswget=0
nswapdev=1
swpages=1050248, swpginuse=0, swpgonly=0 paging=0
kernel pointers:
objs(kern)=0xffffffff82b8f3f0
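
In case it helps narrow down where the kernel memory is going, here is a
rough way to rank the "show all pools" listing above by Npage, i.e. how many
pages each pool currently holds (a sketch; the column layout is assumed from
the listing above, and the 4096-byte page size comes from the uvmexp output):

#!/usr/bin/env python3
# Sketch: paste the ddb "show all pools" listing on stdin and print the
# pools ranked by Npage (pages currently held by each pool).
import sys

PAGE_SIZE = 4096  # pagesize from the uvmexp output above

pools = []
for line in sys.stdin:
    cols = line.split()
    # Pool lines: Name Size Requests Fail Releases Pgreq Pgrel Npage
    #             Hiwat Minpg Maxpg Idle
    if len(cols) == 12 and cols[1].isdigit():
        pools.append((int(cols[7]), cols[0]))   # (Npage, Name)

for npage, name in sorted(pools, reverse=True)[:15]:
    print(f"{name:<14} {npage:>7} pages  ~{npage * PAGE_SIZE // 1024} KB")
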
>
> > >
> > > > ddb{1}> tr
> > > > ddb{1}> savectx() at savectx+0xae
> > > > end of kernel
> > > > end trace frame: 0x7ae419eb6220, count: -1
> > > >
> > > > ddb{1}> ps
> > > > ddb{1}> PID TID PPID UID S FLAGS WAIT COMMAND
> > > > 56384 64771 1 0 3 0x100083 ttyin getty
> > > > 72686 27501 1 0 3 0x100083 ttyin getty
> > > > 36459 175760 1 0 3 0x100083 ttyin getty
> > > > 81450 57092 1 0 3 0x100083 ttyin getty
> > > > 96634 204314 1 0 3 0x100083 ttyin ksh
> > > > 65510 363957 1 0 3 0x100098 kqread cron
> > > > 88845 514315 1 10000 3 0x80 kqread python3.12
> > > > 97326 174375 1 10000 3 0x80 kqread python3.12
> > > > 22253 67759 1 10000 3 0x80 kqread python3.12
> > > > 35444 44336 1 10000 3 0x90 kqread python3.12
> > > > 29275 40912 1 10000 3 0x90 kqread python3.12
> > > > 35724 333760 1 10000 3 0x80 kqread python3.12
> > > > 50769 288439 1 10000 3 0x90 kqread python3.12
> > > > 82066 241649 1 10000 3 0x10 netlock python3.12
> > > > 53406 363520 1 10000 3 0x80 kqread python3.12
> > > > 74524 458639 1 10000 3 0x90 kqread python3.12
> > > > 74524 199419 1 10000 3 0x4000090 fsleep python3.12
> > > > 74524 83014 1 10000 3 0x4000090 fsleep python3.12
> > > > *26047 417104 1 76 7 0x1000010 p0f3
> > > > 763 241728 1 760 3 0x90 kqread snmpd
> > > > 57219 438582 3052 95 3 0x1100092 kqread smtpd
> > > > 38463 126657 3052 103 3 0x1100092 kqread smtpd
> > > > 64172 119575 3052 95 3 0x1100092 kqread smtpd
> > > > 51265 104678 3052 95 3 0x100092 kqread smtpd
> > > > 43007 287543 3052 95 3 0x1100092 kqread smtpd
> > > > 40890 233284 3052 95 3 0x1100092 kqread smtpd
> > > > 3052 172226 1 0 3 0x100080 kqread smtpd
> > > > 26543 98368 1 0 3 0x88 kqread sshd
> > > > 79246 67776 0 0 3 0x14200 acct acct
> > > > 4238 13076 1 0 3 0x100080 kqread ntpd
> > > > 68994 428671 87754 83 3 0x100092 kqread ntpd
> > > > 87754 161036 1 83 3 0x1100092 kqread ntpd
> > > > 30143 296065 1 53 3 0x1000090 kqread unbound
> > > > 6513 123759 16181 74 3 0x1100092 bpf pflogd
> > > > 16181 56823 1 0 3 0x80 sbwait pflogd
> > > > 77125 81173 61465 73 3 0x1100090 kqread syslogd
> > > > 61465 258272 1 0 3 0x100082 sbwait syslogd
> > > > 88652 122618 0 0 3 0x14200 bored smr
> > > > 17119 301982 0 0 3 0x14200 pgzero zerothread
> > > > 75306 268311 0 0 3 0x14200 aiodoned aiodoned
> > > > 89902 288787 0 0 3 0x14200 syncer update
> > > > 77149 53678 0 0 3 0x14200 cleaner cleaner
> > > > 74601 396045 0 0 3 0x14200 reaper reaper
> > > > 14050 464621 0 0 3 0x14200 pgdaemon pagedaemon
> > > > 59034 421709 0 0 3 0x14200 bored wsdisplay0
> > > > 42208 103791 0 0 3 0x14200 usbtsk usbtask
> > > > 3252 461912 0 0 3 0x14200 usbatsk usbatsk
> > > > 58242 495231 0 0 3 0x40014200 acpi0 acpi0
> > > > 57561 206381 0 0 3 0x40014200 idle1
> > > > 61023 369667 0 0 3 0x14200 bored softnet1
> > > > 34197 326659 0 0 3 0x14200 netlock softnet0
> > > > 52256 164467 0 0 3 0x14200 bored systqmp
> > > > 71398 21045 0 0 7 0x14200 systq
> > > > 6890 354256 0 0 3 0x14200 tmoslp softclockmp
> > > > 60049 384374 0 0 3 0x40014200 tmoslp softclock
> > > > 74022 123588 0 0 3 0x40014200 idle0
> > > > 1 198910 0 0 3 0x82 wait init
> > > > 0 0 -1 0 3 0x10200 scheduler swapper
> > > >
> > > > ddb{1}> show reg
> > > > rdi 0xffffffff829f04f8 kprintf_mutex
> > > > rsi 0x5
> > > > rbp 0xffff80002ddbdc10
> > > > rbx 0
> > > > rdx 0
> > > > rcx 0x1900 __ALIGN_SIZE+0x900
> > > > rax 0x3c
> > > > r8 0x70000 acpi_pdirpa+0x5be71
> > > > r9 0xffff80002dc3b000
> > > > r10 0
> > > > r11 0x986d6894b8c166b
> > > > r12 0
> > > > r13 0
> > > > r14 0xffff80002dd302b8
> > > > r15 0
> > > > rip 0xffffffff823723ee savectx+0xae
> > > > cs 0x8
> > > > rflags 0x46
> > > > rsp 0xffff80002ddbdb90
> > > > ss 0x10
> > > > savectx+0xae: movl $0,%gs:0x688
> > > >
> > > > ddb{1}> show malloc
> > > > ddb{1}> Type InUse MemUse HighUse Limit Requests Type Lim
> > > > devbuf 2194 5224K 5289K 186616K 8223 0
> > > > pcb 17 8208K 12304K 186616K 45 0
> > > > rtable 2173 57K 60K 186616K 52874 0
> > > > pf 20 39K 55K 186616K 1586 0
> > > > ifaddr 400 97K 97K 186616K 400 0
> > > > ifgroup 27 1K 1K 186616K 30 0
> > > > sysctl 4 1K 9K 186616K 10 0
> > > > counters 54 35K 35K 186616K 54 0
> > > > ioctlops 0 0K 4K 186616K 39816 0
> > > > mount 6 6K 6K 186616K 6 0
> > > > vnodes 1263 79K 79K 186616K 1339 0
> > > > UFS quota 1 32K 32K 186616K 1 0
> > > > UFS mount 25 65K 65K 186616K 25 0
> > > > shm 2 1K 1K 186616K 2 0
> > > > VM map 2 1K 1K 186616K 2 0
> > > > sem 2 0K 0K 186616K 2 0
> > > > dirhash 351 68K 68K 186616K 381 0
> > > > ACPI 3761 457K 633K 186616K 18596 0
> > > > file desc 12 20K 21K 186616K 34 0
> > > > proc 96 76K 93K 186616K 4935 0
> > > > NFS srvsock 1 0K 0K 186616K 1 0
> > > > NFS daemon 1 16K 16K 186616K 1 0
> > > > in_multi 531 29K 29K 186616K 531 0
> > > > ether_multi 130 8K 8K 186616K 130 0
> > > > ISOFS mount 1 32K 32K 186616K 1 0
> > > > MSDOSFS mount 1 16K 16K 186616K 1 0
> > > > ttys 37 97K 97K 186616K 37 0
> > > > exec 0 0K 1K 186616K 39087 0
> > > > fusefs mount 1 32K 32K 186616K 1 0
> > > > tdb 3 0K 0K 186616K 3 0
> > > > VM swap 8 582K 584K 186616K 10 0
> > > > UVM amap 4623 584K 1015K 186616K 346048 0
> > > > UVM aobj 3 2K 2K 186616K 3 0
> > > > pinsyscall 68 136K 210K 186616K 108216 0
> > > > USB 21 15K 15K 186616K 25 0
> > > > USB device 8 0K 0K 186616K 8 0
> > > > USB HC 1 0K 0K 186616K 1 0
> > > > memdesc 1 4K 4K 186616K 1 0
> > > > crypto data 1 1K 1K 186616K 1 0
> > > > ip6_options 1 0K 3K 186616K 21040 0
> > > > NDP 5 0K 16K 186616K 134 0
> > > > temp 10 8622K 8751K 186616K 9681105 0
> > > > kqueue 37 70K 80K 186616K 1586 0
> > > > SYN cache 2 16K 16K 186616K 2 0
> > > >
> > > > On Mon, Jan 19, 2026 at 4:23 PM K R <[email protected]> wrote:
> > > > >
> > > > > >Synopsis: panic: malloc: out of space in kmem_map
> > > > > >Category: kernel amd64
> > > > > >Environment:
> > > > > System : OpenBSD 7.8
> > > > > Details : OpenBSD 7.8 (GENERIC.MP) #1: Sat Nov 29
> > > > > 11:02:59 MST 2025
> > > > >
> > > > > [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> > > > >
> > > > > Architecture: OpenBSD.amd64
> > > > > Machine : amd64
> > > > > >Description:
> > > > >
> > > > > The machine is running 7.8 + syspatches under VMware:
> > > > >
> > > > > hw.model=Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
> > > > > hw.vendor=VMware, Inc.
> > > > > hw.product=VMware20,1
> > > > > hw.physmem=4277600256
> > > > > hw.ncpufound=2
> > > > > hw.ncpuonline=2
> > > > >
> > > > > and panics with an "out of space in kmem_map" message. The panic, trace,
> > > > > and ps output are shown below.
> > > > >
> > > > > I wish I could show malloc, but the machine is in a remote location
> > > > > and these are the only ddb commands I got before the operator decided
> > > > > to reboot.
> > > > >
> > > > > panic: malloc: out of space in kmem_map
> > > > > Stopped at db_enter+0x14: popq %rbp
> > > > > TID PID UID PRFLAGS PFLAGS CPU COMMAND
> > > > > *327273 39043 0 0x14000 0x200 0 systq
> > > > >
> > > > > db_enter() at db_enter+0x14
> > > > > panic(ffffffff82573eac) at panic+0xd5
> > > > > malloc(2a39,2,9) at malloc+0x823
> > > > > vmt_nicinfo_task(ffff8000000f8800) at vmt_nicinfo_task+0xec
> > > > > taskq_thread(ffffffff82a19e10) at taskq_thread+0x129
> > > > > end trace frame: 0x0, count: -5
> > > > >
> > > > > PID TID PPID UID S FLAGS WAIT COMMAND
> > > > > 34434 429242 1 0 3 0x100083 ttyin getty
> > > > > 45351 273621 1 0 3 0x100083 ttyin getty
> > > > > 15766 13242 1 0 3 0x100083 ttyin getty
> > > > > 22501 485732 1 0 3 0x100083 ttyin getty
> > > > > 21121 14373 1 0 3 0x100083 ttyin getty
> > > > > 80812 223396 1 0 3 0x100098 kqread cron
> > > > > 38632 393850 1 10000 3 0x80 kqread python3.12
> > > > > 50241 286369 1 10000 3 0x80 kqread python3.12
> > > > > 47425 216199 1 10000 3 0x80 kqread python3.12
> > > > > 15348 391586 1 10000 3 0x90 kqread python3.12
> > > > > 83699 242757 1 10000 3 0x90 kqread python3.12
> > > > > 85859 155143 1 10000 3 0x80 kqread python3.12
> > > > > 140 96058 1 10000 3 0x90 kqread python3.12
> > > > > 16478 159685 1 10000 3 0x90 kqread python3.12
> > > > > 83476 226912 1 10000 3 0x80 kqread python3.12
> > > > > 90068 368113 1 10000 3 0x90 kqread python3.12
> > > > > 48780 36449 1 76 3 0x1000090 kqread p0f3
> > > > > 41298 290255 1 760 3 0x90 kqread snmpd
> > > > > 47065 410042 45934 95 3 0x1100092 kqread smtpd
> > > > > 69131 288318 45934 103 3 0x1100092 kqread smtpd
> > > > > 16340 95197 45934 95 3 0x1100092 kqread smtpd
> > > > > 93858 467609 45934 95 3 0x100092 kqread smtpd
> > > > > 77301 381360 45934 95 3 0x1100092 kqread smtpd
> > > > > 21497 499144 45934 95 3 0x1100092 kqread smtpd
> > > > > 45934 163643 1 0 3 0x100080 kqread smtpd
> > > > > 16761 447799 1 0 3 0x88 kqread sshd
> > > > > 57214 310491 0 0 3 0x14200 acct acct
> > > > > 56721 278490 1 0 3 0x100080 kqread ntpd
> > > > > 57480 393701 1368 83 3 0x100092 kqread ntpd
> > > > > 1368 281100 1 83 3 0x1100092 kqread ntpd
> > > > > 24741 184818 1 53 3 0x1000090 kqread unbound
> > > > > 74565 391331 50900 74 3 0x1100092 bpf pflogd
> > > > > 50900 22496 1 0 3 0x80 sbwait pflogd
> > > > > 65059 173120 1614 73 3 0x1100090 kqread syslogd
> > > > > 1614 223274 1 0 3 0x100082 sbwait syslogd
> > > > > 12330 136338 0 0 3 0x14200 bored smr
> > > > > 60396 73572 0 0 3 0x14200 pgzero zerothread
> > > > > 46408 208812 0 0 3 0x14200 aiodoned aiodoned
> > > > > 44729 344674 0 0 3 0x14200 syncer update
> > > > > 61833 363291 0 0 3 0x14200 cleaner cleaner
> > > > > 52556 361252 0 0 3 0x14200 reaper reaper
> > > > > 64026 456140 0 0 3 0x14200 pgdaemon pagedaemon
> > > > > 75515 242523 0 0 3 0x14200 bored wsdisplay0
> > > > > 14784 395040 0 0 3 0x14200 usbtsk usbtask
> > > > > 78465 209741 0 0 3 0x14200 usbatsk usbatsk
> > > > > 70654 374635 0 0 3 0x40014200 acpi0 acpi0
> > > > > 48248 77950 0 0 7 0x40014200 idle1
> > > > > 21581 78258 0 0 3 0x14200 bored softnet1
> > > > > 42528 246111 0 0 3 0x14200 netlock softnet0
> > > > > 84149 341522 0 0 3 0x14200 bored systqmp
> > > > > *39043 327273 0 0 7 0x14200 systq
> > > > > 50129 384305 0 0 3 0x14200 netlock softclockmp
> > > > > 86142 318003 0 0 3 0x40014200 tmoslp softclock
> > > > > 95618 290560 0 0 3 0x40014200 idle0
> > > > > 1 184077 0 0 3 0x82 wait init
> > > > > 0 0 -1 0 3 0x10200 scheduler swapper
> > > > >
> > > > > >How-To-Repeat:
> > > > >
> > > > > It seems to be related to VMware when the machine is under
> > > > > medium/heavy network traffic. Other bare-metal machines with similar
> > > > > daemons/traffic work just fine.
> > > > >
> > > > > Is there any command (vmstat, systat, etc.) I could run while the
> > > > > machine is alive that would help?
> > > > >
> > > > > Thanks,
> > > > > --Kor
> > > > >
> > > > > >Fix:
> > > > >
> > > > > Unknown.
> > > >
> > >
> > > --
> > > :wq Claudio
> >
>
> --
> :wq Claudio