On 2018/12/18 11:34, Arnaud BRAND wrote:
> Hi,
> 
> I'm running 6.4 stable, with latest syspatches.
> 
> I saw ospf6d reporting this in the logs
> Dec 18 08:18:10 obsd64-ic1 ospf6d[68658]: send_packet: error sending packet
> on interface vmx1: No buffer space available
> 
> Searching the web, I gathered that netstat -m might shed some light, so I
> proceeded:
> obsd64-ic1# netstat -m
> 610 mbufs in use:
>         543 mbufs allocated to data
>         8 mbufs allocated to packet headers
>         59 mbufs allocated to socket names and addresses
> 13/200 mbuf 2048 byte clusters in use (current/peak)
> 0/30 mbuf 2112 byte clusters in use (current/peak)
> 1/56 mbuf 4096 byte clusters in use (current/peak)
> 0/48 mbuf 8192 byte clusters in use (current/peak)
> 475/2170 mbuf 9216 byte clusters in use (current/peak)
> 0/0 mbuf 12288 byte clusters in use (current/peak)
> 0/0 mbuf 16384 byte clusters in use (current/peak)
> 0/0 mbuf 65536 byte clusters in use (current/peak)
> 10196/23304/524288 Kbytes allocated to network (current/peak/max)
> 0 requests for memory denied
> 0 requests for memory delayed
> 0 calls to protocol drain routines
> 
> So if there were no requests denied or delayed and the peak was only 24MB
> out of 512MB max, what could cause ospf6d to complain?
> Should I be worried about this message?
> 
> Looking at the sendto man page, I gather that it can return ENOBUFS in
> two cases:
> Case 1 - The system was unable to allocate an internal buffer
> -> this does not seem to be the case, as shown above
> 
> This leaves only case 2: The output queue for a network interface was full.
> 
> Looking at netstat -id I see drops on vmx1 and vmx3.
> Both of these cards are VMXNET3 cards connected to different
> VLANs/Portgroups on the same vswitch, which has two 10G uplinks to the
> switches.
> 
> sysctl | grep drops shows
> net.inet.ip.ifq.drops=0
> net.inet6.ip6.ifq.drops=0
> net.pipex.inq.drops=0
> net.pipex.outq.drops=0
> 
> I'm out of ideas for where to look next.
> Please, could a network guru provide some insight/help?
> Or just tell me that it's not worth bothering and I should stop here?
> 
> Thanks for your help and have a nice day !
> Arnaud

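Regarding the ENOBUFS itself: from userland the two causes are
indistinguishable, since sendto(2) sets the same errno whether an mbuf
allocation failed or the interface output queue was full, which is why
the kernel counters (netstat -m, netstat -id) are the only way to tell
them apart. A minimal sketch of such a send path (a plain sendto()
wrapper for illustration, not ospf6d's actual code):

/*
 * Log ENOBUFS from sendto(2).  At this point the caller cannot tell
 * whether the mbuf pools were exhausted (case 1) or the interface
 * output queue was full (case 2).
 */
#include <sys/socket.h>

#include <errno.h>
#include <stdio.h>
#include <string.h>

static int
send_pkt(int fd, const void *buf, size_t len,
    const struct sockaddr *dst, socklen_t dstlen)
{
    if (sendto(fd, buf, len, 0, dst, dstlen) == -1) {
        if (errno == ENOBUFS)
            /* same message either way, as in ospf6d's log line */
            fprintf(stderr, "send_pkt: %s\n", strerror(errno));
        return (-1);
    }
    return (0);
}
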
It may be worth trying e1000/em(4). I had quite frequent panics with
vmx(4) (https://marc.info/?l=openbsd-bugs&w=2&r=1&s=vmxnet3_getbuf&q=b);
the same VM has been completely stable since switching to em(4).
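
If it helps (an assumption about the setup, since the VM config wasn't
posted): on ESXi the adapter type is chosen per NIC, so switching a
vmx(4) interface to em(4) is a change in the VM's .vmx file from, e.g.,

    ethernet1.virtualDev = "vmxnet3"

to

    ethernet1.virtualDev = "e1000"

with the VM powered off (or by re-adding the NIC as an E1000 adapter in
the vSphere UI), plus renaming the matching /etc/hostname.vmxN file to
/etc/hostname.emN in the guest, since the interface name changes with
the driver. The ethernet1 index above is only an example; check which
ethernetN entries correspond to vmx1 and vmx3.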
