----- Original Message ----

> From: James Peltier <james_a_pelt...@yahoo.ca>
> To: misc@openbsd.org
> Sent: Tue, September 21, 2010 9:51:05 AM
> Subject: Re: em(4) ierrs
> 
> ----- Original Message ----
> 
> > From: James Peltier <james_a_pelt...@yahoo.ca>
> > To: misc@openbsd.org
> > Cc: misc@openbsd.org
> > Sent: Tue, September 21, 2010 9:46:40 AM
> > Subject: Re: em(4) ierrs
> > 
> > ----- Original Message ----
> > 
> > > From: Joerg Goltermann <go...@openbsd.org>
> > > To: Andre Keller <a...@list.ak.cx>
> > > Cc: misc@openbsd.org
> > > Sent: Tue, September 21, 2010 12:21:28 AM
> > > Subject: Re: em(4) ierrs
> > > 
> > > On 20.09.2010 19:15, Andre Keller wrote:
> > > > Hi
> > > >
> > > > I have some odd packet loss on an OpenBSD-based router (running
> > > > -current as of the beginning of September ...).
> > > >
> > > > The router has 6 physical interfaces (all em, Intel 82575EB), 4 of
> > > > them have traffic (about 10-20 Mbps).
> > > 
> > > What packet rate do you expect on the interfaces? Do you see
> > > livelocks (systat -b mbuf)?
> > > 
> > >  - Joerg
> > 
> > 
> > Livelocks are seen on my em interfaces as well.  I also have livelocks
> > on my far less busy bge1 management interface.  See below:
> > 
> > IFACE             LIVELOCKS  SIZE ALIVE   LWM   HWM   CWM
> > System                       256   116          84
> >                               2k    92         504
> > lo0
> > em0                   29363    2k    37     4   256    37
> > em1                   10174    2k    37     4   256    37
> > bge0
> > bge1                      4    2k    17    17   512    17
> > enc0
> > vlan300
> > bridge0
> > pflog0
> > pflow0
> 
> 
> I should mention that these numbers might have been taken prior to some
> recent tuning.  However, for the purpose of following this thread I will
> keep an eye on them to be sure.

I am in bridging mode and I, too, am seeing a slow increase in livelocks
on my em interfaces.  Traffic has been quite low over the past week or so,
so load certainly shouldn't be an issue.  The only modification I have made
thus far is bumping net.inet.ip.ifq.maxlen to 2048.  If you want any other
info, please let me know.
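
For reference, the tuning was just the one sysctl; the old value shown in
the output below is whatever the snapshot default was (256, if I remember
right):

# sysctl net.inet.ip.ifq.maxlen=2048
net.inet.ip.ifq.maxlen: 256 -> 2048

plus the same line in /etc/sysctl.conf so it survives a reboot.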


# systat -b mbuf
   1 users    Load 0.13 0.09 0.08                      Tue Sep 21 20:22:30 2010

IFACE             LIVELOCKS  SIZE ALIVE   LWM   HWM   CWM
System                        256    98          84
                               2k    74         504
lo0
em0                   29891    2k    29     4   256    29
em1                   10381    2k    28     4   256    28
bge0
bge1                      4    2k    17    17   512    17
enc0
vlan300
bridge0
pflog0
pflow0


# netstat -m
100 mbufs in use:
        95 mbufs allocated to data
        1 mbuf allocated to packet headers
        4 mbufs allocated to socket names and addresses
74/1008/6144 mbuf 2048 byte clusters in use (current/peak/max)
0/8/6144 mbuf 4096 byte clusters in use (current/peak/max)
0/8/6144 mbuf 8192 byte clusters in use (current/peak/max)
0/8/6144 mbuf 9216 byte clusters in use (current/peak/max)
0/8/6144 mbuf 12288 byte clusters in use (current/peak/max)
0/8/6144 mbuf 16384 byte clusters in use (current/peak/max)
0/8/6144 mbuf 65536 byte clusters in use (current/peak/max)
2544 Kbytes allocated to network (6% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
#
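
Since the subject here is em(4) ierrs, I am also watching the raw error
counters to see whether they climb together with the livelocks.  Nothing
fancier than the standard netstat interface summary (the egrep pattern is
just my shorthand for the header line plus the two em ports):

# netstat -ni | egrep '^(Name|em)'

or the same thing in a loop to catch bursts (the 5-second interval is
arbitrary):

# while :; do netstat -ni | egrep '^em'; sleep 5; done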

 ---
James A. Peltier     james_a_pelt...@yahoo.ca
