On 2026/02/13 10:18, K R wrote:
> 
> It runs about 10 network daemons serving TCP clients.  About 64-128
> open sockets each, at any given time.  Not much traffic, but around 4k
> pf states.

Yet it seems you must have run into 100k states to be hitting the pf state limit?
I wonder if it's worth scripting a check on the number of states and
dumping the state table (pfctl -ss -v at least) to get an idea what's in
there when it's high.
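
Something like the untested sketch below could run from cron every minute
or so; THRESHOLD and LOG are made-up placeholders, pick your own values:

#!/bin/sh
# untested sketch -- dump the pf state table when the count looks high
THRESHOLD=50000
LOG=/var/log/pf-states.log
states=$(pfctl -si | awk '/current entries/ { print $3 }')
if [ "$states" -gt "$THRESHOLD" ]; then
    {
        echo "=== $(date) pf states: $states ==="
        pfctl -ss -v
    } >> "$LOG"
fi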


> The resources:
> 
> hw.model=Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
> hw.vendor=VMware, Inc.
> hw.physmem=4277600256
> hw.ncpuonline=2
> 
> > > ddb{1}> show all pools
> > > Name     Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
> > > tcpcb     736  4309640  103  4091353 20236   382 19854 19854     0     8    0
> > > inpcb     328  5892193    0  5673850 18519   314 18205 18205     0     8    0
> > > sockpl    552  7621733    0  7403339 15972   364 15608 15608     0     8    0
> > > mbufpl    256   286232    0        0 13640     5 13635 13635     0     8    0
> >
> > If I read this correctly the box has 20k+ TCP sockets open, which results
> > in high resource usage of tcpcb, inpcb, sockpl and, for the TCP template,
> > mbufs.
> 
> What I see now, using systat pool, sorted by Npage:
> 
> NAME             SIZE REQUESTS     FAIL    INUSE    PGREQ    PGREL    NPAGE    HIWAT    MINPG    MAXPG
> tcpcb             736  1670128        0    40124     4548      308     4240     4297        0        8
> inpcb             328  2438004        0    40182     4150      247     3903     3944        0        8
> sockpl            552  3299804        0    40236     3621      255     3366     3385        0        8
> mbufpl            256 49530665        0    39963     2949        5     2944     2944        0        8
> 
> > At least the tcpcb and sockpl pools use the kmem_map; that is
> > (19854 + 15608) * 4k or 141848K.  Your kmem_map has a limit of 186616K,
> > so there is just not enough space.  You may need to increase memory, or
> > you can tune NKMEMPAGES via config(8).
> 
> I see.  It is odd, though, that we have similar machines (both VMs and
> bare metal, with similar resources) and the only one that panics is this
> one, running under VMware.
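
If you do end up raising NKMEMPAGES as Claudio suggests, the usual route
is a custom kernel config, roughly like the sketch below; the config name
and value are only illustrative (65536 4k pages would give a 256M
kmem_map):

# sketch only -- CUSTOM and the NKMEMPAGES value are illustrative
cd /usr/src/sys/arch/amd64/conf
cat > CUSTOM <<'EOF'
include "arch/amd64/conf/GENERIC.MP"
option NKMEMPAGES=65536
EOF
config CUSTOM
cd ../compile/CUSTOM
make obj && make && make install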
> 
> >
> > > pfstate   384 16598777 5933960 16587284 239196 237883  1313 10001     0     8    0
> >
> > There seems to be some strange bursts on the pfstate pool as well.
> >
> > --
> > :wq Claudio
> 
