On 2014-07-17, Darryl Wisneski <s...@commwebworks.com> wrote:
> netstat -s -p udp |grep "dropped due to full socket" 
> 345197 dropped due to full socket buffers

We're assuming this relates to OpenVPN packets, but you can check which
sockets have data queued (the Recv-Q/Send-Q counters); the grep filters
out sockets where both are zero:

netstat -an | grep -v ' 0      0'

So if things are building up here rather than on the interface queue,
there ought to be a reason why the socket is slow to drain.

Are you doing queueing?
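
If so, per-queue drop counters are worth a look - a sketch, assuming pf
queueing is in use on this box:

pfctl -s queue -v    # watch the per-queue 'dropped' counters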

How is fragmentation being handled - in OpenVPN, or are you relying on
the kernel to do it? Or are you using a small MTU anyway to avoid
fragmentation?
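
If OpenVPN is doing it, the directives look like this (the values are
purely illustrative, not a recommendation):

# in the OpenVPN config: fragment internally and clamp TCP MSS to match
fragment 1300
mssfix 1300
# or avoid fragmentation entirely with a small tunnel MTU:
#tun-mtu 1400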

How does pfctl -si look?
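
The counters worth a first look there:

# 'memory' is drops from pf hitting memory limits; 'congestion' ticks
# when pf drops because the IP input queue is congested
pfctl -si | grep -i -e memory -e congestion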

> I'm not sure how to proceed on tuning as I read tuning via sysctl is
> becoming pointless.

It's preferred if things can auto-tune without touching sysctl, but not
everything is done that way.

> net.inet.udp.sendspace=131028   # udp send buffer

This may need increasing, though it's already quite large. (While
researching this mail it seems FreeBSD doesn't have this sysctl - does
anyone here know what they do instead?)
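
For experimenting (numbers illustrative - OpenVPN's own sndbuf/rcvbuf
directives set the buffers per-socket and are the more targeted knob):

# current kernel defaults for UDP socket buffers
sysctl net.inet.udp.sendspace net.inet.udp.recvspace
# per-socket alternative, in the OpenVPN config:
#   sndbuf 393216
#   rcvbuf 393216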

> net.inet.ip.ifq.maxlen=1536

Monitor net.inet.ip.ifq.drops - is it increasing?
1536 is already a fairly large queue though (especially as I think you
mentioned 100Mb). How did you choose that value?
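
A trivial way to watch it:

# sample the IP input-queue drop counter; steady growth means the queue
# still overflows despite the larger maxlen
while :; do sysctl net.inet.ip.ifq.drops; sleep 10; done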

> kern.bufcachepercent=90         # kernel buffer cache memory percentage

This won't help OpenVPN - kern.bufcachepercent only sizes the
filesystem buffer cache, not network buffers. Is this box also doing
other things?

How does kern.netlivelocks look? (Monitor it over time rather than just
looking at the total.)
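
The same sampling approach works here, e.g.:

# growth under load is the interesting signal, not the absolute total
while :; do date; sysctl kern.netlivelocks; sleep 60; done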

> OpenBSD 5.5 (GENERIC.MP) #315: Wed Mar  5 09:37:46 MST 2014
>     dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> real mem = 34301018112 (32712MB)
> avail mem = 33379323904 (31833MB)

Thanks for including the full dmesg.
