On 24/06/14 3:08 PM, Chris Cappuccio wrote:
Kapetanakis Giannis [bil...@edu.physics.uoc.gr] wrote:
On 23/06/14 21:33, Henning Brauer wrote:
* Chris Cappuccio <ch...@nmedia.net> [2014-06-23 20:24]:
I have a Sandy Bridge Xeon box with PF NAT that handles 200 to
700 Mbps daily. It has a single myx interface and runs OpenBSD 5.5 (not
-current). It does nothing but PF NAT and related routing. No barrage
of VLANs or interfaces. No dynamic routing. Nothing else. 60,000 to
100,000 states.
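
For context, a minimal rule of that kind looks something like the
following (the interface name and internal network here are made-up
examples, not my actual config):

  # NAT internal traffic out the external myx interface
  match out on myx0 from 10.0.0.0/8 to any nat-to (myx0)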

With an MP kernel, kern.netlivelocks increases by something like 150,000
per day!! The packet loss was notable.

With an SP kernel, the 'netlivelock' counter barely moves. Maybe 100 per
day on average, but for the past week, maybe 5.
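
A rough way to watch that rate, if anyone wants to compare numbers (a
sketch only; the 60-second interval and output format are arbitrary
choices):

  #!/bin/sh
  # log how much kern.netlivelocks grows per minute
  prev=$(sysctl -n kern.netlivelocks)
  while sleep 60; do
          cur=$(sysctl -n kern.netlivelocks)
          echo "$(date) delta=$((cur - prev))"
          prev=$cur
  done
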
As already said in private, I'm not seeing anything like that, which
makes me wonder what is different for you.

Me neither

# uname -a
OpenBSD server 5.5 GENERIC.MP#156 i386


I'm using amd64...

sysctl -a|grep netlive
kern.netlivelocks=50

# pfctl -ss|wc -l
    73203

# pfctl -sr|wc -l
      294

routing/firewalling/some NAT at ~500 Mbps
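
Worth pairing with the pf counters, which will show whether the state
table itself is dropping anything (stock pfctl output, nothing exotic):

# pfctl -si | grep -e memory -e state-limit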

I have some ideas. I'm going to do some troubleshooting when I have a
chance to think clearly.

I think the disk subsystem could be part of the issue. I see the most
netlivelocks on a box with a USB key; mfi is in second place.
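
One way to test that theory, assuming interrupt or I/O load from the
storage controller is the mechanism: watch the per-device interrupt
counters with

# vmstat -i

and live disk activity with

# iostat -w 1

while the livelock counter climbs. Both are stock tools, nothing
specific to this box.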

This reminds me of a system I had mentioned to you in the past.
Checking that system again, I noticed that since switching it from
spinning rust to SSDs, the number of livelocks seems to have gone down.

# sysctl -a | grep livelocks
kern.netlivelocks=4163
# uptime
 3:23PM  up 1 day, 45 mins, 1 user, load averages: 0.79, 0.91, 0.83
# sysctl -a | grep livelocks
kern.netlivelocks=4190
# uptime
 3:37PM  up 1 day, 59 mins, 1 user, load averages: 0.67, 0.99, 0.87

Before the switch, that would be up in the tens of thousands by now.
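
For scale, that window works out to roughly two livelocks a minute, or
on the order of 2,800 a day:

# echo "(4190 - 4163) * 60 * 24 / 14" | bc
2777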

