On Thu, Feb 12, 2026 at 04:03:36PM -0300, K R wrote:
> On Thu, Feb 12, 2026 at 10:01 AM Claudio Jeker <[email protected]> 
> wrote:
> >
> > On Thu, Feb 12, 2026 at 09:38:32AM -0300, K R wrote:
> > > On Wed, Feb 11, 2026 at 10:50 AM Claudio Jeker <[email protected]> 
> > > wrote:
> > > >
> > > > On Wed, Feb 11, 2026 at 10:24:20AM -0300, K R wrote:
> > > > > Same panic, this time with show malloc included.  Please let me know
> > > > > if you need additional ddb commands next time.
> > > > >
> > > > > Thanks,
> > > > > --Kor
> > > > >
> > > > > ddb{1}> show panic
> > > > > ddb{1}> *cpu0: malloc: out of space in kmem_map
> > > >
> > > >
> > > > Something is using all memory in kmem_map and then the system goes boom.
> > > > It is not malloc itself; the show malloc output does not show any
> > > > bucket that consumes a lot of memory.
> > > >
> > > > show all pools is another place where memory may hide, since multi-page pools
> > >
> > > Thanks, I'll send the show all pools output (and perhaps show uvmexp?) --
> > > the machine panicked again, and I'm waiting for the remote admin to run
> > > the ddb commands.
> > >
> > > > use the kmem_map as well.
> > > >
> > > > You can actually watch this at runtime and see whether something is
> > > > slowly growing. vmstat -m is great for that.
> > >
> > > It may be unrelated, but it caught my eye that the only pool with
> > > failed requests is this one:
> > >
> > > Name        Size Requests Fail    InUse Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
> > > pfstate      384 27749088 7613228  7622 391809 390986 823 10001     0     8    8
> >
> > This is probably unrelated. The pfstate table normally has a limit in
> > place, and AFAIK requests fail because you hit that limit. By default this
> > is 100'000 and you seem to be hitting it.
> > Check the pfctl -si and -sm outputs to see if this matches up.
> 
> You're right -- the states limit is set to 100000.  Right now, after
> the reboot, pfctl -si shows around 7k states.
> 
> >
> > You want to look at the Npage column, which tells you how many pages this
> > pool is currently using.
> >
> > > dmesg was showing:
> > > uvm_mapent_alloc: out of static map entries
> >
> > That is kind of OK. On startup part of the kmem_map is preallocated, and
> > this warning triggers when more map entries are needed. It is an indication
> > that your system needs a lot of kernel memory, but not of why.
> 
> Another ddb session, this time with show all pools included.
> 
> ddb{1}> show panic
> panic: malloc: out of space in kmem_map
> Stopped at      db_enter+0x14:  popq    %rbp
>     TID    PID    UID     PRFLAGS     PFLAGS  CPU  COMMAND
>  256720  41745     76   0x1000010          0    0  p0f3
> *475540  93944      0     0x14000      0x200    1  systq
> 
> ddb{1}> tr
> db_enter() at db_enter+0x14
> panic(ffffffff8257309f) at panic+0xd5
> malloc(2a39,2,9) at malloc+0x823
> vmt_nicinfo_task(ffff8000000f8800) at vmt_nicinfo_task+0xec
> taskq_thread(ffffffff82a35098) at taskq_thread+0x129
> end trace frame: 0x0, count: -5

What is this box doing, and are enough resources actually assigned to the VM
for that task?
 
> ddb{1}> show all pools
> Name      Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
> tcpcb      736  4309640  103  4091353 20236   382 19854 19854     0     8    0
> inpcb      328  5892193    0  5673850 18519   314 18205 18205     0     8    0
> sockpl     552  7621733    0  7403339 15972   364 15608 15608     0     8    0
> mbufpl     256   286232    0        0 13640     5 13635 13635     0     8    0

If I read this correctly the box has 20k+ TCP sockets open, which results in
high resource usage in tcpcb, inpcb, and sockpl, and for the TCP template
mbufs.
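
A quick way to cross-check that from userland (just a rough sketch; it counts
netstat lines starting with "tcp", which includes tcp6 sockets):

    # rough count of currently open TCP sockets
    netstat -an | grep -c '^tcp'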

At least the tcpcb and sockpl pools use the kmem_map, which amounts to
(19854 + 15608) * 4k = 141848K. Your kmem_map has a limit of 186616K, so there
is just not enough space. You may need to add more memory, or you can tune
NKMEMPAGES via config(8).
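
To catch the growth before the next panic it may help to log this
periodically, along the lines of the vmstat -m suggestion above. A minimal
sketch (interval and log file are arbitrary choices):

    # append a timestamped vmstat -m snapshot every 10 minutes
    while sleep 600; do
        { date; vmstat -m; } >> /var/log/vmstat-m.log
    done

Diffing two snapshots should show which pool's Npage count keeps climbing.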

> pfstate    384 16598777 5933960 16587284 239196 237883 1313 10001 0     8    0

There seem to be some strange bursts on the pfstate pool as well.
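
A rough way to keep an eye on that is to watch the pool next to pf's own
state counters, for example:

    # pf state count vs. pfstate pool usage
    pfctl -si | grep 'current entries'
    vmstat -m | grep pfstate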

-- 
:wq Claudio
