the watermarks basically prevent mbuf cluster memory from being returned to uvm. this means that after a spike of socket activity, you can end up with a lot of memory sitting in the cluster pools that the rest of the system can never use again.
the watermark was introduced to prevent clusters constantly churning back and forth between the pools and uvm. this happens because we tend to take a lot of packets off an rx ring and then refill it, and shortly after that the network stack processes these mbufs and frees them. so we constantly allocate and free big batches of clusters. now that pools hold onto idle memory for at least a second before giving it back, these bursts are smoothed out and the watermarks aren't necessary anymore.

this also removes the hard limits on the cluster pools and moves the limit to the mbuf pool instead. the reason for this is that we still need to limit the packets in the system somehow, but we have several cluster pools to draw from, and some network drivers even provide their own cluster pools now. if we let every cluster pool allocate up to the limit we overcommit. on the other hand, you can't have clusters without mbufs, so if we limit mbufs we generally limit clusters too.

i've been running this on my firewalls for many months now, and they still go fast.

ok?

Index: uipc_mbuf.c
===================================================================
RCS file: /cvs/src/sys/kern/uipc_mbuf.c,v
retrieving revision 1.202
diff -u -p -r1.202 uipc_mbuf.c
--- uipc_mbuf.c	14 Mar 2015 03:38:51 -0000	1.202
+++ uipc_mbuf.c	8 Apr 2015 01:03:42 -0000
@@ -125,8 +125,8 @@ void nmbclust_update(void);
 void	m_zero(struct mbuf *);
 
-const char *mclpool_warnmsg =
-    "WARNING: mclpools limit reached; increase kern.maxclusters";
+const char *mbufpl_warnmsg =
+    "WARNING: mbuf limit reached; increase kern.maxclusters";
 
 /*
  * Initialize the mbuf allocator.
  */
@@ -168,25 +168,7 @@ mbinit(void)
 void
 nmbclust_update(void)
 {
-	int i;
-	/*
-	 * Set the hard limit on the mclpools to the number of
-	 * mbuf clusters the kernel is to support.  Log the limit
-	 * reached message max once a minute.
-	 */
-	for (i = 0; i < nitems(mclsizes); i++) {
-		(void)pool_sethardlimit(&mclpools[i], nmbclust,
-		    mclpool_warnmsg, 60);
-		/*
-		 * XXX this needs to be reconsidered.
-		 * Setting the high water mark to nmbclust is too high
-		 * but we need to have enough spare buffers around so that
-		 * allocations in interrupt context don't fail or mclgeti()
-		 * drivers may end up with empty rings.
-		 */
-		pool_sethiwat(&mclpools[i], nmbclust);
-	}
-	pool_sethiwat(&mbpool, nmbclust);
+	(void)pool_sethardlimit(&mbpool, nmbclust, mbufpl_warnmsg, 60);
 }
 
 /*