On Thu, Jun 28, 2012 at 10:37 PM, Paul Gear <p...@gear.dyndns.org> wrote:

> Would I be better off virtualising this system on VMware? That way I
> could handle all the VLAN tagging in the hypervisor, and the NIC
> presented to the system would be an Intel E1000 instead of a Broadcom.
> The VMware ESXi 5 and Linux drivers for these NICs are rock-solid in my
> experience.

Might be worth a try.
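
For what it's worth, if you do try it: on ESXi 5 the VLAN tag for a
standard vSwitch port group can be set from the ESXi shell along these
lines (the port group name here is just an example):

  esxcli network vswitch standard portgroup set -p "LAN-PG" -v 10

The guest then only ever sees untagged frames on the emulated E1000, so
whatever the host NIC and its driver are doing with tags never touches
the guest.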


> My netstat -m output is shown below.  It's not even close to the limits.
> Keep in mind that this is just a test system. There is no traffic
> going through it. Apart from my initial configuration on Tuesday,
> basically nothing has been done to the box except ping & SNMP from
> our NMS.

Probably not the problem then, I would think.


> Are you saying that you regularly run out of MBUFs and are forced to
> reboot when you do?

Before raising the mbuf cluster limit to 131072, yes. Since making the
change, my longest uptime is 72 days, 15:43, with netstat -m showing:

71214/2006/73220 mbufs in use (current/cache/total)
71107/1045/72152/131072 mbuf clusters in use (current/cache/total/max)
71107/701 mbuf+clusters out of packet secondary zone in use (current/cache)
0/90/90/65536 4k (page size) jumbo clusters in use (current/cache/total/max)
12/1324/1336/32768 9k jumbo clusters in use (current/cache/total/max)
0/0/0/16384 16k jumbo clusters in use (current/cache/total/max)
184411K/15369K/199780K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
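
(For the archives: the 131072 cap above is the mbuf cluster limit, which
on FreeBSD/pfSense is the kern.ipc.nmbclusters tunable. A minimal sketch
of the change, assuming you set it at boot in /boot/loader.conf:

  kern.ipc.nmbclusters="131072"

On reasonably recent FreeBSD it can also be bumped at runtime with
"sysctl kern.ipc.nmbclusters=131072", but the loader.conf route makes it
stick across reboots.)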

I record the output of netstat -m daily, and even at 72 days of uptime,
mbuf usage was still growing geometrically.
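
The daily recording is nothing fancy; a cron-driven script along these
lines does the job (the log path is arbitrary):

  #!/bin/sh
  # Append a timestamped netstat -m snapshot to a log file
  echo "==== $(date) ====" >> /var/log/mbuf-usage.log
  netstat -m >> /var/log/mbuf-usage.log

Grepping the "mbuf clusters in use" line back out of that log makes the
growth trend easy to see.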

db
_______________________________________________
List mailing list
List@lists.pfsense.org
http://lists.pfsense.org/mailman/listinfo/list
