[igb] add DROP_EN to each RX queue config if TX flow control is disabled

2014-09-08 Thread Adrian Chadd
Hi,

This patch sets the DROP_EN flag on each RX queue if TX flow
control is disabled. It (mostly) mirrors what the ixgbe driver does.

This prevents a single full RX ring from stalling the rest of the RX rings.

How's it look? (indenting and such aside, thanks google.)



-a

Index: sys/dev/e1000/if_igb.c
===
--- sys/dev/e1000/if_igb.c (revision 271290)
+++ sys/dev/e1000/if_igb.c (working copy)
@@ -4712,6 +4712,18 @@
         rctl |= E1000_RCTL_SZ_2048;
     }
 
+    /*
+     * If TX flow control is disabled and there's >1 queue defined,
+     * enable DROP.
+     *
+     * This drops frames rather than hanging the RX MAC for all queues.
+     */
+    if ((adapter->num_queues > 1) &&
+        (adapter->fc == e1000_fc_none ||
+         adapter->fc == e1000_fc_rx_pause)) {
+        srrctl |= E1000_SRRCTL_DROP_EN;
+    }
+
     /* Setup the Base and Length of the Rx Descriptor Rings */
     for (int i = 0; i < adapter->num_queues; i++, rxr++) {
         u64 bus_addr = rxr->rxdma.dma_paddr;
@@ -6255,6 +6267,7 @@

     adapter->hw.fc.current_mode = adapter->hw.fc.requested_mode;
     e1000_force_mac_fc(&adapter->hw);
+    /* XXX TODO: update DROP_EN on each RX queue if appropriate */
     return (error);
 }
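
For that XXX TODO, a follow-up could look roughly like the sketch below. To be
clear, this is only an illustrative sketch and not part of the posted patch:
igb_update_rx_drop_en() is an invented name, and it assumes the e1000 register
accessor macros (E1000_READ_REG/E1000_WRITE_REG, E1000_SRRCTL) and the same
adapter fields the hunk above already uses.

/*
 * Hedged sketch (not part of the patch): re-apply the DROP_EN policy to
 * every RX queue after a flow-control change, mirroring the condition
 * used in the RX setup path above.
 */
static void
igb_update_rx_drop_en(struct adapter *adapter)
{
    struct e1000_hw *hw = &adapter->hw;
    u32 srrctl;
    int drop, i;

    drop = (adapter->num_queues > 1) &&
        (adapter->fc == e1000_fc_none ||
         adapter->fc == e1000_fc_rx_pause);

    for (i = 0; i < adapter->num_queues; i++) {
        /* Read-modify-write the per-queue split receive control register. */
        srrctl = E1000_READ_REG(hw, E1000_SRRCTL(i));
        if (drop)
            srrctl |= E1000_SRRCTL_DROP_EN;
        else
            srrctl &= ~E1000_SRRCTL_DROP_EN;
        E1000_WRITE_REG(hw, E1000_SRRCTL(i), srrctl);
    }
}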


Re: ixgbe CRITICAL: ECC ERROR!! Please Reboot!!

2014-09-08 Thread Adrian Chadd
Hi,

A bunch of us spent a whole lot of time on the driver, before and
after 10.0-RELEASE happened, squishing a number of ixgbe hang and
out-of-order bugs.

Please update. :)


-a


Re: ixgbe CRITICAL: ECC ERROR!! Please Reboot!!

2014-09-08 Thread Marcelo Gondim

On 08/09/2014 17:05, Ing. Aleš Nechvátal wrote:

I was experiencing similar behaviour on a machine acting as a gateway. It
has one X520-SR2, with both SFP+ ports used to connect to other machines with
the same NICs. One of the NICs would suddenly stop passing data through, and an
ifconfig down/up cycle was needed to make it work again; that was the better
case. In the worse case, the machine rebooted without leaving any message in
syslog (it's on a remote site). This happened at intervals of 5 minutes up to
1 hour (I am not sure about the exact correlation between the traffic going
through and the number of these events, but during traffic peaks it seems
to have happened more often).

First I checked the mbufs, but didn't see any problem. Then I tried to
disable TSO (on both sides of the links), but without observing any
improvement.
I was playing with different versions of the kernel; the newer the kernel (I
went up to 9.3-STABLE, didn't try 10 or HEAD), the more instability. It
seemed that the most stable was 9.1-PRERELEASE with the 2.4.8 driver (at least
I didn't see reboots; a cron script could keep the gateway passing data by
shutting down and bringing up the interfaces).  Now I am running 9.3-RELEASE
r271227M with the 2.5.25 driver from Intel's website. This configuration was
not stable either, until I discovered a high interrupt rate on each queue (the
sum over all queues reached over 400k at some moments). I turned off the
interrupt storm limit by setting hw.intr_storm_threshold=0 (previously set to
15). Since the following reboot (involuntary, btw.) the gateway has kept
running smoothly for 7 hours now, with a peak throughput of approx. 1.6 Gbit/s.

The sum of interrupt rates on all your queues looks quite high too, so there
could be some similarity, and disabling the mentioned limit could help.

My sysctl.conf:
kern.ipc.somaxconn=1024
net.inet.icmp.icmplim=100
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.ip.fastforwarding=1
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
hw.intr_storm_threshold=0

#sysctl -a dev.ix.1
I changed the value of "hw.intr_storm_threshold" to 0, and the 
interface still stopped working.  :(


my sysctl.conf:

net.inet.ip.forwarding=1
net.inet.ip.fastforwarding=1
net.inet6.ip6.forwarding=1
kern.ipc.somaxconn=4096
net.inet.tcp.syncookies=1
net.inet.ip.redirect=1
net.inet.ip.accept_sourceroute=0
net.inet.ip.sourceroute=0
net.inet.tcp.drop_synfin=1
net.inet.udp.blackhole=1
net.inet.tcp.blackhole=2
security.bsd.see_other_uids=0
net.inet.ip.fw.dyn_buckets=65536
net.inet.ip.fw.dyn_max=65536
hw.intr_storm_threshold=0
net.inet.ip.dummynet.pipe_slot_limit=800
net.inet.icmp.icmplim=2000

Cheers,


dev.ix.1.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version -
2.5.25
dev.ix.1.%driver: ix
dev.ix.1.%location: slot=0 function=1
dev.ix.1.%pnpinfo: vendor=0x8086 device=0x10fb subvendor=0x8086
subdevice=0x7a11 class=0x02
dev.ix.1.%parent: pci1
dev.ix.1.fc: 3
dev.ix.1.enable_aim: 1
dev.ix.1.advertise_speed: 0
dev.ix.1.dropped: 0
dev.ix.1.mbuf_defrag_failed: 0
dev.ix.1.watchdog_events: 0
dev.ix.1.link_irq: 2
dev.ix.1.queue0.interrupt_rate: 31250
dev.ix.1.queue0.irqs: 152182132
dev.ix.1.queue0.txd_head: 526
dev.ix.1.queue0.txd_tail: 526
dev.ix.1.queue0.tso_tx: 2
dev.ix.1.queue0.no_tx_dma_setup: 0
dev.ix.1.queue0.no_desc_avail: 119373
dev.ix.1.queue0.tx_packets: 274418209
dev.ix.1.queue0.rxd_head: 441
dev.ix.1.queue0.rxd_tail: 440
dev.ix.1.queue0.rx_packets: 10385
dev.ix.1.queue0.rx_bytes: 25483398737
dev.ix.1.queue0.rx_copies: 87237192
dev.ix.1.queue0.lro_queued: 0
dev.ix.1.queue0.lro_flushed: 0
dev.ix.1.queue1.interrupt_rate: 31250
dev.ix.1.queue1.irqs: 152973172
dev.ix.1.queue1.txd_head: 1282
dev.ix.1.queue1.txd_tail: 1282
dev.ix.1.queue1.tso_tx: 50
dev.ix.1.queue1.no_tx_dma_setup: 0
dev.ix.1.queue1.no_desc_avail: 114840
dev.ix.1.queue1.tx_packets: 271827311
dev.ix.1.queue1.rxd_head: 495
dev.ix.1.queue1.rxd_tail: 494
dev.ix.1.queue1.rx_packets: 103292003
dev.ix.1.queue1.rx_bytes: 20681070454
dev.ix.1.queue1.rx_copies: 89490795
dev.ix.1.queue1.lro_queued: 0
dev.ix.1.queue1.lro_flushed: 0
dev.ix.1.queue2.interrupt_rate: 21739
dev.ix.1.queue2.irqs: 81502157
dev.ix.1.queue2.txd_head: 313
dev.ix.1.queue2.txd_tail: 313
dev.ix.1.queue2.tso_tx: 50
dev.ix.1.queue2.no_tx_dma_setup: 0
dev.ix.1.queue2.no_desc_avail: 111923
dev.ix.1.queue2.tx_packets: 23189156
dev.ix.1.queue2.rxd_head: 349
dev.ix.1.queue2.rxd_tail: 348
dev.ix.1.queue2.rx_packets: 94457797
dev.ix.1.queue2.rx_bytes: 18053240855
dev.ix.1.queue2.rx_copies: 82430878
dev.ix.1.queue2.lro_queued: 0
dev.ix.1.queue2.lro_flushed: 0
dev.ix.1.queue3.interrupt_rate: 8
dev.ix.1.queue3.irqs: 81678068
dev.ix.1.queue3.txd_head: 1990
dev.ix.1.queue3.txd_tail: 1990
dev.ix.1.queue3.tso_tx: 3
dev.ix.1.queue3.no_tx_dma_setup: 0
dev.ix.1.queue3.no_desc_avail: 110545
dev.ix.1.queue3.tx_packets: 21258249
dev.ix.1.queue3.rxd_head: 1544
dev.ix.1.q

Re: [RFC] Patch to improve TSO limitation formula in general

2014-09-08 Thread Eric Joyner
Let me remove my concerns earlier in the thread -- this patch won't
negatively affect any of our drivers; and the problem I mentioned with ixl
would require a change somewhere further up the stack.

---
- Eric Joyner

On Mon, Sep 8, 2014 at 5:05 AM, Rick Macklem  wrote:

> Hans Petter Selasky wrote:
> > On 09/06/14 00:09, Rick Macklem wrote:
> > > Hans Petter Selasky wrote:
> > >> On 09/05/14 23:19, Eric Joyner wrote:
> > >>> There are some concerns if we use this with devices that ixl
> > >>> supports:
> > >>>
> > >>> - The maximum fragment size is 16KB-1, which isn't a power of 2.
> > >>>
> > >>
> > >> Hi Eric,
> > >>
> > >> Multiplying by powers of two is faster than by non-powers of two.
> > >> So in this case you would have to use 8KB as a maximum.
> > >>
> > > Well, I'm no architecture expert, but I really doubt the CPU delay
> > > of a
> > > non-power of 2 multiply/divide is significant related to doing
> > > smaller
> > > TSO segments. Long ago (as in 1970s) I did work on machines where
> > > shifts
> > > for power of 2 multiply/divide was preferable, but these days I
> > > doubt it
> > > is going to matter??
> > >
> > >>> - You can't get the maximum TSO size for ixl devices by
> > >>> multiplying
> > >>> the
> > >>> maximum number of fragments by the maximum size.
> > >>> Instead the number of fragments is AFAIK unlimited, but a segment
> > >>> can only
> > >>> span 8 mbufs (including the [up to 3] mbufs containing the
> > >>> header),
> > >>> and the
> > >>> maximum TSO size is 256KB.
> >
> > Hi,
> >
> > Maybe that can be a separate parameter?
> >
> > I see that your patch assumes that a segment can be any-length. That
> > is
> > not always the case. Remember there are JUMBO mbufs too.
> >
> I thought JUMBO mbufs were only going to be used on the receive side,
> however I suppose if a packet is received into a JUMBO mbuf and then
> forwarded on another interface...
>
> > With my patch, the maximum segment size is a separate parameter. The
> > total number of TSO bytes is then not so useful.
> >
> Well, if you are referring to if_hw_tsomax, I'm not sure it was the
> best plan to begin with. It was implemented for xen and I'm not sure
> that any other driver uses it as of now.
>
> However...
> I'm not a network/hardware guy, but it seems some devices do have
> the IP_MAXPACKET limit (use the ip_len field in the ip header to
> know how large the TSO segment is). There is also at least one device
> > (82598 chip for "ix" driver) that can handle more than IP_MAXPACKET,
> so it seems appropriate to have a value that the driver can set.
>
> Since the maximum size of the gather list for transmit does seem to
> vary a lot between devices, with several handling less than 35, it
> does seem appropriate to allow drivers to specify that.
>
> If devices can't handle a single gather entry over a certain size,
> I think that does need to be specified along with the max size of
> the gather list, since the driver will need to use multiple gather
> entries for a single mbuf and tcp_output() should take that into
> account.
>
> In summary, yep, I basically agree.
>
> rick
> > ps: I will look at your patch soon.
>
> > --HPS
> >


Re: Patches for RFC6937 and draft-ietf-tcpm-newcwv-00

2014-09-08 Thread Tom Jones
Hi Folks,


On Thu, Aug 28, 2014 at 12:39:09PM -0400, George Neville-Neil wrote:
> Adrian,
> 
> Can you put this into a Phabricator for review?
> 
> Lars,
> 
> How have you been testing this?

Did the newcwv patch make its way into Phabricator? I don't think I would have
seen it if it did.

> 
> On 27 Aug 2014, at 4:01, Eggert, Lars wrote:
> 
> > Yep
> >
> > On 2014-8-27, at 9:53, Adrian Chadd  wrote:
> >
> >> Ok. Is it the same patch you sent out in Feb?
> >>
> >>
> >> -a
> >>
> >>
> >> On 27 August 2014 00:43, Eggert, Lars  wrote:
> >>> Not as far as I know.
> >>>
> >>> Lars
> >>>
> >>> On 2014-8-27, at 9:39, Adrian Chadd  wrote:
> >>>
>  Is there a PR for it?
> 
> 
>  -a
> 
> 
>  On 27 August 2014 00:23, Eggert, Lars  wrote:
> > It would be great if people could also review Aris' PRR patch - RFC6937 
> > has been out for a while.
> >
> > Lars
> >
> >
> >
> >
> > On 2014-8-26, at 20:09, Adrian Chadd  wrote:
> >
> >> Hi!
> >>
> >> I'm going to merge Tom's work in a week unless someone gives me a
> >> really good reason not to.
> >>
> >> I think there's been enough work and discussion about it since the
> >> first post from Lars in February and enough review opportunity.
> >>
> >>
> >> -a
> >>
> >>
> >> On 26 August 2014 07:55, Tom Jones  wrote:
> >>> On Tue, Aug 26, 2014 at 02:43:49PM +, Eggert, Lars wrote:
>  Hi,
> 
>  the newcwv patch is probably stale now with Tom Jones' recent patch 
>  based on
>  a more up-to-date version of the Internet-Draft, but the PRR patch 
>  should
>  still be useful?
> >>>
> >>> My newcwv patch is much more up to date than Aris's, but it is 
> >>> slightly
> >>> different in implementation. I have had a few suggestions from 
> >>> Adrian, but he
> >>> couldn't comment on how it relates to the tcp internals.
> >>>
> >>> There is a PR: 
> >>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=191520
> >>>
> >>> The biggest difference in structure between mine and Aris's patch is 
> >>> the use of
> >>> tcp timers. It would be good to hear if my approach or Aris's is 
> >>> preferred.
> >>>
>  On 2014-6-19, at 23:35, George Neville-Neil  
>  wrote:
> 
> > On 4 Feb 2014, at 1:38, Eggert, Lars wrote:
> >
> >> Hi,
> >>
> >> below are two patches that implement RFC6937 ("Proportional Rate 
> >> Reduction for TCP") and draft-ietf-tcpm-newcwv-00 ("Updating TCP 
> >> to support Rate-Limited Traffic"). They were done by Aris 
> >> Angelogiannopoulos for his MS thesis, which is at 
> >> https://eggert.org/students/angelogiannopoulos-thesis.pdf.
> >>
> >> The patches should apply to -CURRENT as of Sep 17, 2013. (Sorry 
> >> for the delay in sending them, we'd been trying to get some 
> >> feedback from committers first, without luck.)
> >>
> >> Please note that newcwv is still a work in progress in the IETF, 
> >> and the patch has some limitations with regards to the "pipeACK 
> >> Sampling Period" mentioned in the Internet-Draft. Aris says this 
> >> in his thesis about what exactly he implemented:
> >>
> >> "The second implementation choice, is in regards with the 
> >> measurement of pipeACK. This variable is the most important 
> >> introduced by the method and is used to compute the phase that the 
> >> sender currently lies in. In order to compute pipeACK the approach 
> >> suggested by the Internet Draft (ID) is followed [ncwv]. During 
> >> initialization, pipeACK is set to the maximum possible value. A 
> >> helper variable prevHighACK is introduced that is initialized to 
> >> the initial sequence number (iss). prevHighACK holds the value of 
> >> the highest acknowledged byte so far. pipeACK is measured once per 
> >> RTT meaning that when an ACK covering prevHighACK is received, 
> >> pipeACK becomes the difference between the current ACK and 
> >> prevHighACK. This is called a pipeACK sample.  A newer version of 
> >> the draft suggests that multiple pipeACK samples can be used 
> >> during the pipeACK sampling period."
> >>
> >> Lars
> >>
> >>
> >> [prr.patch]
> >>
> >> [newcwv.patch]
> >
> > Apologies for not looking at this as yet.  It is now closer to the 
> > top of my list.
> >
> > Best,
> > George
> 
> >>>
> >>>
> >>>
> >>> --
> >>> Tom
> >>> @adventureloop
> >>> adventurist.me
> >>>
> >>> :wq

Re: ixgbe(4) spin lock held too long

2014-09-08 Thread Sean Bruno
On Mon, 2014-09-08 at 15:34 -0400, Eric van Gyzen wrote:
> >> Unread portion of the kernel message buffer:
> >> spin lock 0x812a0400 (callout) held by 0xf800151fe000 (tid 13) too long
> 
> TID 13 is usually a kernel idle thread, which would seem to
> indicate
> a dangling lock.  Can you enable WITNESS (without WITNESS_SKIPSPIN) on
> this box? 

Will do.  I'll report back when we get a crash with WITNESS.

sean



RE: ixgbe CRITICAL: ECC ERROR!! Please Reboot!!

2014-09-08 Thread Ing. Aleš Nechvátal
I was experiencing similar behaviour on a machine acting as a gateway. It
has one X520-SR2, with both SFP+ ports used to connect to other machines with
the same NICs. One of the NICs would suddenly stop passing data through, and an
ifconfig down/up cycle was needed to make it work again; that was the better
case. In the worse case, the machine rebooted without leaving any message in
syslog (it's on a remote site). This happened at intervals of 5 minutes up to
1 hour (I am not sure about the exact correlation between the traffic going
through and the number of these events, but during traffic peaks it seems
to have happened more often).

First I checked the mbufs, but didn't see any problem. Then I tried to
disable TSO (on both sides of the links), but without observing any
improvement.
I was playing with different versions of the kernel; the newer the kernel (I
went up to 9.3-STABLE, didn't try 10 or HEAD), the more instability. It
seemed that the most stable was 9.1-PRERELEASE with the 2.4.8 driver (at least
I didn't see reboots; a cron script could keep the gateway passing data by
shutting down and bringing up the interfaces).  Now I am running 9.3-RELEASE
r271227M with the 2.5.25 driver from Intel's website. This configuration was
not stable either, until I discovered a high interrupt rate on each queue (the
sum over all queues reached over 400k at some moments). I turned off the
interrupt storm limit by setting hw.intr_storm_threshold=0 (previously set to
15). Since the following reboot (involuntary, btw.) the gateway has kept
running smoothly for 7 hours now, with a peak throughput of approx. 1.6 Gbit/s.

The sum of interrupt rates on all your queues looks quite high too, so there
could be some similarity, and disabling the mentioned limit could help.
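
A side note on that hw.intr_storm_threshold tweak: besides sysctl.conf, the
knob can also be changed at runtime with sysctl(8), or programmatically via
sysctlbyname(3) as in the minimal sketch below (setting it requires root, and
the value 0 simply disables interrupt storm detection).

#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>

int
main(void)
{
    int old_val = 0, new_val = 0;      /* 0 disables storm detection */
    size_t len = sizeof(old_val);

    /* Fetch the current value and set the new one in a single call. */
    if (sysctlbyname("hw.intr_storm_threshold", &old_val, &len,
        &new_val, sizeof(new_val)) != 0) {
        perror("sysctlbyname");
        return (1);
    }
    printf("hw.intr_storm_threshold: %d -> %d\n", old_val, new_val);
    return (0);
}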

My sysctl.conf:
kern.ipc.somaxconn=1024
net.inet.icmp.icmplim=100
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.ip.fastforwarding=1
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
hw.intr_storm_threshold=0

#sysctl -a dev.ix.1

dev.ix.1.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version -
2.5.25
dev.ix.1.%driver: ix
dev.ix.1.%location: slot=0 function=1
dev.ix.1.%pnpinfo: vendor=0x8086 device=0x10fb subvendor=0x8086
subdevice=0x7a11 class=0x02
dev.ix.1.%parent: pci1
dev.ix.1.fc: 3
dev.ix.1.enable_aim: 1
dev.ix.1.advertise_speed: 0
dev.ix.1.dropped: 0
dev.ix.1.mbuf_defrag_failed: 0
dev.ix.1.watchdog_events: 0
dev.ix.1.link_irq: 2
dev.ix.1.queue0.interrupt_rate: 31250
dev.ix.1.queue0.irqs: 152182132
dev.ix.1.queue0.txd_head: 526
dev.ix.1.queue0.txd_tail: 526
dev.ix.1.queue0.tso_tx: 2
dev.ix.1.queue0.no_tx_dma_setup: 0
dev.ix.1.queue0.no_desc_avail: 119373
dev.ix.1.queue0.tx_packets: 274418209
dev.ix.1.queue0.rxd_head: 441
dev.ix.1.queue0.rxd_tail: 440
dev.ix.1.queue0.rx_packets: 10385
dev.ix.1.queue0.rx_bytes: 25483398737
dev.ix.1.queue0.rx_copies: 87237192
dev.ix.1.queue0.lro_queued: 0
dev.ix.1.queue0.lro_flushed: 0
dev.ix.1.queue1.interrupt_rate: 31250
dev.ix.1.queue1.irqs: 152973172
dev.ix.1.queue1.txd_head: 1282
dev.ix.1.queue1.txd_tail: 1282
dev.ix.1.queue1.tso_tx: 50
dev.ix.1.queue1.no_tx_dma_setup: 0
dev.ix.1.queue1.no_desc_avail: 114840
dev.ix.1.queue1.tx_packets: 271827311
dev.ix.1.queue1.rxd_head: 495
dev.ix.1.queue1.rxd_tail: 494
dev.ix.1.queue1.rx_packets: 103292003
dev.ix.1.queue1.rx_bytes: 20681070454
dev.ix.1.queue1.rx_copies: 89490795
dev.ix.1.queue1.lro_queued: 0
dev.ix.1.queue1.lro_flushed: 0
dev.ix.1.queue2.interrupt_rate: 21739
dev.ix.1.queue2.irqs: 81502157
dev.ix.1.queue2.txd_head: 313
dev.ix.1.queue2.txd_tail: 313
dev.ix.1.queue2.tso_tx: 50
dev.ix.1.queue2.no_tx_dma_setup: 0
dev.ix.1.queue2.no_desc_avail: 111923
dev.ix.1.queue2.tx_packets: 23189156
dev.ix.1.queue2.rxd_head: 349
dev.ix.1.queue2.rxd_tail: 348
dev.ix.1.queue2.rx_packets: 94457797
dev.ix.1.queue2.rx_bytes: 18053240855
dev.ix.1.queue2.rx_copies: 82430878
dev.ix.1.queue2.lro_queued: 0
dev.ix.1.queue2.lro_flushed: 0
dev.ix.1.queue3.interrupt_rate: 8
dev.ix.1.queue3.irqs: 81678068
dev.ix.1.queue3.txd_head: 1990
dev.ix.1.queue3.txd_tail: 1990
dev.ix.1.queue3.tso_tx: 3
dev.ix.1.queue3.no_tx_dma_setup: 0
dev.ix.1.queue3.no_desc_avail: 110545
dev.ix.1.queue3.tx_packets: 21258249
dev.ix.1.queue3.rxd_head: 1544
dev.ix.1.queue3.rxd_tail: 1543
dev.ix.1.queue3.rx_packets: 99140619
dev.ix.1.queue3.rx_bytes: 19476001717
dev.ix.1.queue3.rx_copies: 86228489
dev.ix.1.queue3.lro_queued: 0
dev.ix.1.queue3.lro_flushed: 0
dev.ix.1.mac_stats.crc_errs: 0
dev.ix.1.mac_stats.ill_errs: 0
dev.ix.1.mac_stats.byte_errs: 0
dev.ix.1.mac_stats.short_discards: 0
dev.ix.1.mac_stats.local_faults: 0
dev.ix.1.mac_stats.remote_faults: 2
dev.ix.1.mac_stats.rec_len_errs: 0
dev.ix.1.mac_stats.xon_txd: 0
dev.ix.1.mac_stats.xon_recvd: 0
dev.ix.1.mac_stats.xoff_txd: 0
dev.ix.1.mac_stats.xoff_recvd: 0
dev.ix.1.mac_stats.total_octets_rcvd: 85345731961
dev.ix.1.mac_stats.good_octets_rcvd: 8534573196

Re: When to use and not use divert/natd ...

2014-09-08 Thread John Nielsen
On Sep 5, 2014, at 9:15 PM, John Case  wrote:

> For many years I would build FreeBSD firewalls and they would be very, very 
> simple - I just set gateway_enable="yes" in rc.conf and everything just 
> worked.
> 
> However, these firewalls *always* had real, routable IPs on both sides. Both 
> interfaces had real, routable IPs.
> 
> Now I have a firewall that has two non-routable IPs for its interfaces, and 
> is connected to an internet router with the real IP.  When I try to build a 
> very simple firewall it does not work, and I am forced to use ipdivert and 
> natd.
> 
> If I use ipdivert and natd, it works just fine.
> 
> So, am I correct that I can create a simple gateway without natd/divert as 
> long as both interfaces are real IPs, but if both interfaces are non-routable 
> IPs, I am forced to use divert/natd ?

Just think about the 'routing' aspect. In your current scenario it sounds like 
the Internet-connected device is doing NAT. It knows about its public IP and 
its private subnet. It sounds like you have a second private subnet behind your 
FreeBSD machine about which the Internet-connected device knows nothing. For 
packets to get from the Internet-connected device to your second subnet one of 
two things needs to happen:
 1) The Internet-connected device has a static route to the second subnet (so 
it knows to use your FreeBSD machine as the gateway), or
 2) The FreeBSD machine performs NAT (a second time), so the Internet-connected 
device sends traffic to it even though it knows nothing about the subnet behind 
it.

I would prefer 1) as it's simpler and double-NAT isn't generally a good thing. 
However, if you don't have a way to add a route to the Internet-connected 
device then 2) isn't necessarily bad.

In your previous all-routable-IPs setups something was presumably advertising 
the route for you. The new setup isn't much different in principle.

JN

PS: Using the in-kernel NAT with IPFW is simpler and more efficient than using 
natd...



Re: ixgbe(4) spin lock held too long

2014-09-08 Thread Eric van Gyzen
On 09/08/2014 15:19, Sean Bruno wrote:
> On Mon, 2014-09-08 at 12:09 -0700, Sean Bruno wrote:
>> This sort of looks like the hardware failed to respond to us in time?
>> Too busy?
>>
>> sean
>>
> This seems to be affecting my 10/stable machines from 15Aug2014.  
>
> Not a lot of churn in the code so I don't think this is new.  The
> afflicted machines, quite a few by my count, appear to have not been
> super busy (pushing about 200 Mb/s).
>
> sean
>
>
>
>> panic: spin lock held too long
>>
>> GNU gdb 6.1.1 [FreeBSD]
>> Copyright 2004 Free Software Foundation, Inc.
>> GDB is free software, covered by the GNU General Public License, and you
>> are
>> welcome to change it and/or distribute copies of it under certain
>> conditions.
>> Type "show copying" to see the conditions.
>> There is absolutely no warranty for GDB.  Type "show warranty" for
>> details.
>> This GDB was configured as "amd64-marcel-freebsd"...
>>
>> Unread portion of the kernel message buffer:
>> spin lock 0x812a0400 (callout) held by 0xf800151fe000 (tid
>> 13) too long

TID 13 is usually a kernel idle thread, which would seem to indicate
a dangling lock.  Can you enable WITNESS (without WITNESS_SKIPSPIN) on
this box?

>> panic: spin lock held too long
>> cpuid = 4
>> KDB: stack backtrace:
>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
>> 0xfe1f292752b0
>> panic() at panic+0x155/frame 0xfe1f29275330
>> _mtx_lock_spin_cookie() at _mtx_lock_spin_cookie+0x243/frame
>> 0xfe1f29275390
>> callout_lock() at callout_lock+0xd4/frame 0xfe1f292753d0
>> callout_reset_sbt_on() at callout_reset_sbt_on+0x10b/frame
>> 0xfe1f29275450
>> tcp_timer_activate() at tcp_timer_activate+0xe7/frame 0xfe1f29275470
>> tcp_do_segment() at tcp_do_segment+0x96/frame 0xfe1f292755b0
>> tcp_input() at tcp_input+0xeed/frame 0xfe1f292756f0
>> ip_input() at ip_input+0x97/frame 0xfe1f29275740
>> netisr_dispatch_src() at netisr_dispatch_src+0x62/frame
>> 0xfe1f292757b0
>> ether_demux() at ether_demux+0x126/frame 0xfe1f292757e0
>> ether_nh_input() at ether_nh_input+0x349/frame 0xfe1f29275840
>> netisr_dispatch_src() at netisr_dispatch_src+0x62/frame
>> 0xfe1f292758b0
>> tcp_lro_flush() at tcp_lro_flush+0x198/frame 0xfe1f292758d0
>> ixgbe_rxeof() at ixgbe_rxeof+0x6b3/frame 0xfe1f29275990
>> ixgbe_msix_que() at ixgbe_msix_que+0xba/frame 0xfe1f292759e0
>> intr_event_execute_handlers() at intr_event_execute_handlers+0xab/frame
>> 0xfe1f29275a20
>> ithread_loop() at ithread_loop+0x96/frame 0xfe1f29275a70
>> fork_exit() at fork_exit+0x9a/frame 0xfe1f29275ab0
>> fork_trampoline() at fork_trampoline+0xe/frame 0xfe1f29275ab0
>> --- trap 0, rip = 0, rsp = 0xfe1f29275b70, rbp = 0 ---
>> Uptime: 8d20h4m58s
>>
>


Re: ixgbe(4) spin lock held too long

2014-09-08 Thread Sean Bruno
On Mon, 2014-09-08 at 12:09 -0700, Sean Bruno wrote:
> This sort of looks like the hardware failed to respond to us in time?
> Too busy?
> 
> sean
> 
This seems to be affecting my 10/stable machines from 15Aug2014.  

Not a lot of churn in the code so I don't think this is new.  The
afflicted machines, quite a few by my count, appear to have not been
super busy (pushing about 200 Mb/s).

sean



> 
> panic: spin lock held too long
> 
> GNU gdb 6.1.1 [FreeBSD]
> Copyright 2004 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you
> are
> welcome to change it and/or distribute copies of it under certain
> conditions.
> Type "show copying" to see the conditions.
> There is absolutely no warranty for GDB.  Type "show warranty" for
> details.
> This GDB was configured as "amd64-marcel-freebsd"...
> 
> Unread portion of the kernel message buffer:
> spin lock 0x812a0400 (callout) held by 0xf800151fe000 (tid
> 13) too long
> panic: spin lock held too long
> cpuid = 4
> KDB: stack backtrace:
> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
> 0xfe1f292752b0
> panic() at panic+0x155/frame 0xfe1f29275330
> _mtx_lock_spin_cookie() at _mtx_lock_spin_cookie+0x243/frame
> 0xfe1f29275390
> callout_lock() at callout_lock+0xd4/frame 0xfe1f292753d0
> callout_reset_sbt_on() at callout_reset_sbt_on+0x10b/frame
> 0xfe1f29275450
> tcp_timer_activate() at tcp_timer_activate+0xe7/frame 0xfe1f29275470
> tcp_do_segment() at tcp_do_segment+0x96/frame 0xfe1f292755b0
> tcp_input() at tcp_input+0xeed/frame 0xfe1f292756f0
> ip_input() at ip_input+0x97/frame 0xfe1f29275740
> netisr_dispatch_src() at netisr_dispatch_src+0x62/frame
> 0xfe1f292757b0
> ether_demux() at ether_demux+0x126/frame 0xfe1f292757e0
> ether_nh_input() at ether_nh_input+0x349/frame 0xfe1f29275840
> netisr_dispatch_src() at netisr_dispatch_src+0x62/frame
> 0xfe1f292758b0
> tcp_lro_flush() at tcp_lro_flush+0x198/frame 0xfe1f292758d0
> ixgbe_rxeof() at ixgbe_rxeof+0x6b3/frame 0xfe1f29275990
> ixgbe_msix_que() at ixgbe_msix_que+0xba/frame 0xfe1f292759e0
> intr_event_execute_handlers() at intr_event_execute_handlers+0xab/frame
> 0xfe1f29275a20
> ithread_loop() at ithread_loop+0x96/frame 0xfe1f29275a70
> fork_exit() at fork_exit+0x9a/frame 0xfe1f29275ab0
> fork_trampoline() at fork_trampoline+0xe/frame 0xfe1f29275ab0
> --- trap 0, rip = 0, rsp = 0xfe1f29275b70, rbp = 0 ---
> Uptime: 8d20h4m58s
> 
> 


ixgbe(4) spin lock held too long

2014-09-08 Thread Sean Bruno
This sort of looks like the hardware failed to respond to us in time?
Too busy?

sean


panic: spin lock held too long

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you
are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for
details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:
spin lock 0x812a0400 (callout) held by 0xf800151fe000 (tid
13) too long
panic: spin lock held too long
cpuid = 4
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfe1f292752b0
panic() at panic+0x155/frame 0xfe1f29275330
_mtx_lock_spin_cookie() at _mtx_lock_spin_cookie+0x243/frame
0xfe1f29275390
callout_lock() at callout_lock+0xd4/frame 0xfe1f292753d0
callout_reset_sbt_on() at callout_reset_sbt_on+0x10b/frame
0xfe1f29275450
tcp_timer_activate() at tcp_timer_activate+0xe7/frame 0xfe1f29275470
tcp_do_segment() at tcp_do_segment+0x96/frame 0xfe1f292755b0
tcp_input() at tcp_input+0xeed/frame 0xfe1f292756f0
ip_input() at ip_input+0x97/frame 0xfe1f29275740
netisr_dispatch_src() at netisr_dispatch_src+0x62/frame
0xfe1f292757b0
ether_demux() at ether_demux+0x126/frame 0xfe1f292757e0
ether_nh_input() at ether_nh_input+0x349/frame 0xfe1f29275840
netisr_dispatch_src() at netisr_dispatch_src+0x62/frame
0xfe1f292758b0
tcp_lro_flush() at tcp_lro_flush+0x198/frame 0xfe1f292758d0
ixgbe_rxeof() at ixgbe_rxeof+0x6b3/frame 0xfe1f29275990
ixgbe_msix_que() at ixgbe_msix_que+0xba/frame 0xfe1f292759e0
intr_event_execute_handlers() at intr_event_execute_handlers+0xab/frame
0xfe1f29275a20
ithread_loop() at ithread_loop+0x96/frame 0xfe1f29275a70
fork_exit() at fork_exit+0x9a/frame 0xfe1f29275ab0
fork_trampoline() at fork_trampoline+0xe/frame 0xfe1f29275ab0
--- trap 0, rip = 0, rsp = 0xfe1f29275b70, rbp = 0 ---
Uptime: 8d20h4m58s




Re: ixgbe CRITICAL: ECC ERROR!! Please Reboot!!

2014-09-08 Thread Marcelo Gondim

On 08/09/2014 15:38, Eric Joyner wrote:

Getting local / remote faults is strange; are these stats from after you
rebooted?


When the crash happens I just do:

# ifconfig ix0 down; ifconfig ix0 up; ifconfig ix1 down; ifconfig ix1 up

After that, the two interface ports start working again.

At no time did I have to restart the server.

# uptime
 3:51PM  up 11 days, 15:44, 2 users, load averages: 5.36, 5.68, 5.69

Tomorrow I will be going to the datacenter to check the temperature of 
the network interface. As the crashes occur when traffic is high, I 
believe the temperature increases because the processing load is high.





---
- Eric Joyner

On Fri, Sep 5, 2014 at 8:39 PM, Marcelo Gondim 
wrote:


On 05/09/2014 17:17, Marcelo Gondim wrote:


On 05/09/2014 16:49, Adrian Chadd wrote:


Hi,

But is the airflow in the unit sufficient?

I had this problem at a previous job - the box was running fine, the
room was very cold, but the internal fans in the server were set to
"be very quiet". It wasn't enough to keep the ixgbe NICs happy. I had
to change the fan settings to "just always run full speed".

The fan temperature feedback loop was based on sensors on the CPU,
_not_ on the peripherals.


Hi Adrian,

Ummm. I'll check it and improve internal cooling.  :)
She is not happy and I'm also not. rsrsrsr

Cheers,


Besides the problem of the network interface heating up, I am putting some
information here. Could you tell me if there is anything strange, or is it
normal?

dev.ix.0.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version -
2.5.15
dev.ix.0.%driver: ix
dev.ix.0.%location: slot=0 function=0 handle=\_SB_.PCI1.BR48.S3F0
dev.ix.0.%pnpinfo: vendor=0x8086 device=0x154d subvendor=0x8086
subdevice=0x7b11 class=0x02
dev.ix.0.%parent: pci131
dev.ix.0.fc: 3
dev.ix.0.enable_aim: 1
dev.ix.0.advertise_speed: 0
dev.ix.0.dropped: 0
dev.ix.0.mbuf_defrag_failed: 0
dev.ix.0.watchdog_events: 0
dev.ix.0.link_irq: 121769
dev.ix.0.queue0.interrupt_rate: 5319
dev.ix.0.queue0.irqs: 7900830877
dev.ix.0.queue0.txd_head: 1037
dev.ix.0.queue0.txd_tail: 1037
dev.ix.0.queue0.tso_tx: 142
dev.ix.0.queue0.no_tx_dma_setup: 0
dev.ix.0.queue0.no_desc_avail: 0
dev.ix.0.queue0.tx_packets: 9725701450
dev.ix.0.queue0.rxd_head: 1175
dev.ix.0.queue0.rxd_tail: 1174
dev.ix.0.queue0.rx_packets: 13069276955
dev.ix.0.queue0.rx_bytes: 3391061018
dev.ix.0.queue0.rx_copies: 8574407
dev.ix.0.queue0.lro_queued: 0
dev.ix.0.queue0.lro_flushed: 0
dev.ix.0.queue1.interrupt_rate: 41666
dev.ix.0.queue1.irqs: 7681141208
dev.ix.0.queue1.txd_head: 219
dev.ix.0.queue1.txd_tail: 221
dev.ix.0.queue1.tso_tx: 57
dev.ix.0.queue1.no_tx_dma_setup: 0
dev.ix.0.queue1.no_desc_avail: 44334
dev.ix.0.queue1.tx_packets: 10196891433
dev.ix.0.queue1.rxd_head: 1988
dev.ix.0.queue1.rxd_tail: 1987
dev.ix.0.queue1.rx_packets: 13210132242
dev.ix.0.queue1.rx_bytes: 4317357059
dev.ix.0.queue1.rx_copies: 8131936
dev.ix.0.queue1.lro_queued: 0
dev.ix.0.queue1.lro_flushed: 0
dev.ix.0.queue2.interrupt_rate: 5319
dev.ix.0.queue2.irqs: 7647486080
dev.ix.0.queue2.txd_head: 761
dev.ix.0.queue2.txd_tail: 761
dev.ix.0.queue2.tso_tx: 409
dev.ix.0.queue2.no_tx_dma_setup: 0
dev.ix.0.queue2.no_desc_avail: 54207
dev.ix.0.queue2.tx_packets: 10161246425
dev.ix.0.queue2.rxd_head: 1874
dev.ix.0.queue2.rxd_tail: 1872
dev.ix.0.queue2.rx_packets: 13175551880
dev.ix.0.queue2.rx_bytes: 4472798418
dev.ix.0.queue2.rx_copies: 7488876
dev.ix.0.queue2.lro_queued: 0
dev.ix.0.queue2.lro_flushed: 0
dev.ix.0.queue3.interrupt_rate: 50
dev.ix.0.queue3.irqs: 7641129521
dev.ix.0.queue3.txd_head: 2039
dev.ix.0.queue3.txd_tail: 2039
dev.ix.0.queue3.tso_tx: 9
dev.ix.0.queue3.no_tx_dma_setup: 0
dev.ix.0.queue3.no_desc_avail: 150346
dev.ix.0.queue3.tx_packets: 10619971896
dev.ix.0.queue3.rxd_head: 1055
dev.ix.0.queue3.rxd_tail: 1054
dev.ix.0.queue3.rx_packets: 13137835529
dev.ix.0.queue3.rx_bytes: 4063197306
dev.ix.0.queue3.rx_copies: 8188713
dev.ix.0.queue3.lro_queued: 0
dev.ix.0.queue3.lro_flushed: 0
dev.ix.0.queue4.interrupt_rate: 5319
dev.ix.0.queue4.irqs: 7439824996
dev.ix.0.queue4.txd_head: 26
dev.ix.0.queue4.txd_tail: 26
dev.ix.0.queue4.tso_tx: 553912
dev.ix.0.queue4.no_tx_dma_setup: 0
dev.ix.0.queue4.no_desc_avail: 0
dev.ix.0.queue4.tx_packets: 10658683718
dev.ix.0.queue4.rxd_head: 684
dev.ix.0.queue4.rxd_tail: 681
dev.ix.0.queue4.rx_packets: 13204786830
dev.ix.0.queue4.rx_bytes: 3700845239
dev.ix.0.queue4.rx_copies: 8193379
dev.ix.0.queue4.lro_queued: 0
dev.ix.0.queue4.lro_flushed: 0
dev.ix.0.queue5.interrupt_rate: 15151
dev.ix.0.queue5.irqs: 7456613396
dev.ix.0.queue5.txd_head: 603
dev.ix.0.queue5.txd_tail: 603
dev.ix.0.queue5.tso_tx: 17
dev.ix.0.queue5.no_tx_dma_setup: 0
dev.ix.0.queue5.no_desc_avail: 0
dev.ix.0.queue5.tx_packets: 10639139790
dev.ix.0.queue5.rxd_head: 404
dev.ix.0.queue5.rxd_tail: 403
dev.ix.0.queue5.rx_packets: 13144301293
dev.ix.0.queue5.rx_bytes: 3986784766
dev.ix.0.queue5.rx_copies: 8256195
dev.ix.0.queue5.lro_queued: 0
dev.ix.0.queue5.lro_flushed: 0
dev.ix.0.queue6.interrupt_rate: 125000
dev.

Re: ixgbe CRITICAL: ECC ERROR!! Please Reboot!!

2014-09-08 Thread Eric Joyner
Getting local / remote faults is strange; are these stats from after you
rebooted?

---
- Eric Joyner

On Fri, Sep 5, 2014 at 8:39 PM, Marcelo Gondim 
wrote:

> On 05/09/2014 17:17, Marcelo Gondim wrote:
>
>> On 05/09/2014 16:49, Adrian Chadd wrote:
>>
>>> Hi,
>>>
>>> But is the airflow in the unit sufficient?
>>>
>>> I had this problem at a previous job - the box was running fine, the
>>> room was very cold, but the internal fans in the server were set to
>>> "be very quiet". It wasn't enough to keep the ixgbe NICs happy. I had
>>> to change the fan settings to "just always run full speed".
>>>
>>> The fan temperature feedback loop was based on sensors on the CPU,
>>> _not_ on the peripherals.
>>>
>> Hi Adrian,
>>
>> Ummm. I'll check it and improve internal cooling.  :)
>> She is not happy and I'm also not. rsrsrsr
>>
>> Cheers,
>>
>
> Besides the problem of the network interface heating up, I am putting some
> information here. Could you tell me if there is anything strange, or is it
> normal?
>
> dev.ix.0.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version -
> 2.5.15
> dev.ix.0.%driver: ix
> dev.ix.0.%location: slot=0 function=0 handle=\_SB_.PCI1.BR48.S3F0
> dev.ix.0.%pnpinfo: vendor=0x8086 device=0x154d subvendor=0x8086
> subdevice=0x7b11 class=0x02
> dev.ix.0.%parent: pci131
> dev.ix.0.fc: 3
> dev.ix.0.enable_aim: 1
> dev.ix.0.advertise_speed: 0
> dev.ix.0.dropped: 0
> dev.ix.0.mbuf_defrag_failed: 0
> dev.ix.0.watchdog_events: 0
> dev.ix.0.link_irq: 121769
> dev.ix.0.queue0.interrupt_rate: 5319
> dev.ix.0.queue0.irqs: 7900830877
> dev.ix.0.queue0.txd_head: 1037
> dev.ix.0.queue0.txd_tail: 1037
> dev.ix.0.queue0.tso_tx: 142
> dev.ix.0.queue0.no_tx_dma_setup: 0
> dev.ix.0.queue0.no_desc_avail: 0
> dev.ix.0.queue0.tx_packets: 9725701450
> dev.ix.0.queue0.rxd_head: 1175
> dev.ix.0.queue0.rxd_tail: 1174
> dev.ix.0.queue0.rx_packets: 13069276955
> dev.ix.0.queue0.rx_bytes: 3391061018
> dev.ix.0.queue0.rx_copies: 8574407
> dev.ix.0.queue0.lro_queued: 0
> dev.ix.0.queue0.lro_flushed: 0
> dev.ix.0.queue1.interrupt_rate: 41666
> dev.ix.0.queue1.irqs: 7681141208
> dev.ix.0.queue1.txd_head: 219
> dev.ix.0.queue1.txd_tail: 221
> dev.ix.0.queue1.tso_tx: 57
> dev.ix.0.queue1.no_tx_dma_setup: 0
> dev.ix.0.queue1.no_desc_avail: 44334
> dev.ix.0.queue1.tx_packets: 10196891433
> dev.ix.0.queue1.rxd_head: 1988
> dev.ix.0.queue1.rxd_tail: 1987
> dev.ix.0.queue1.rx_packets: 13210132242
> dev.ix.0.queue1.rx_bytes: 4317357059
> dev.ix.0.queue1.rx_copies: 8131936
> dev.ix.0.queue1.lro_queued: 0
> dev.ix.0.queue1.lro_flushed: 0
> dev.ix.0.queue2.interrupt_rate: 5319
> dev.ix.0.queue2.irqs: 7647486080
> dev.ix.0.queue2.txd_head: 761
> dev.ix.0.queue2.txd_tail: 761
> dev.ix.0.queue2.tso_tx: 409
> dev.ix.0.queue2.no_tx_dma_setup: 0
> dev.ix.0.queue2.no_desc_avail: 54207
> dev.ix.0.queue2.tx_packets: 10161246425
> dev.ix.0.queue2.rxd_head: 1874
> dev.ix.0.queue2.rxd_tail: 1872
> dev.ix.0.queue2.rx_packets: 13175551880
> dev.ix.0.queue2.rx_bytes: 4472798418
> dev.ix.0.queue2.rx_copies: 7488876
> dev.ix.0.queue2.lro_queued: 0
> dev.ix.0.queue2.lro_flushed: 0
> dev.ix.0.queue3.interrupt_rate: 50
> dev.ix.0.queue3.irqs: 7641129521
> dev.ix.0.queue3.txd_head: 2039
> dev.ix.0.queue3.txd_tail: 2039
> dev.ix.0.queue3.tso_tx: 9
> dev.ix.0.queue3.no_tx_dma_setup: 0
> dev.ix.0.queue3.no_desc_avail: 150346
> dev.ix.0.queue3.tx_packets: 10619971896
> dev.ix.0.queue3.rxd_head: 1055
> dev.ix.0.queue3.rxd_tail: 1054
> dev.ix.0.queue3.rx_packets: 13137835529
> dev.ix.0.queue3.rx_bytes: 4063197306
> dev.ix.0.queue3.rx_copies: 8188713
> dev.ix.0.queue3.lro_queued: 0
> dev.ix.0.queue3.lro_flushed: 0
> dev.ix.0.queue4.interrupt_rate: 5319
> dev.ix.0.queue4.irqs: 7439824996
> dev.ix.0.queue4.txd_head: 26
> dev.ix.0.queue4.txd_tail: 26
> dev.ix.0.queue4.tso_tx: 553912
> dev.ix.0.queue4.no_tx_dma_setup: 0
> dev.ix.0.queue4.no_desc_avail: 0
> dev.ix.0.queue4.tx_packets: 10658683718
> dev.ix.0.queue4.rxd_head: 684
> dev.ix.0.queue4.rxd_tail: 681
> dev.ix.0.queue4.rx_packets: 13204786830
> dev.ix.0.queue4.rx_bytes: 3700845239
> dev.ix.0.queue4.rx_copies: 8193379
> dev.ix.0.queue4.lro_queued: 0
> dev.ix.0.queue4.lro_flushed: 0
> dev.ix.0.queue5.interrupt_rate: 15151
> dev.ix.0.queue5.irqs: 7456613396
> dev.ix.0.queue5.txd_head: 603
> dev.ix.0.queue5.txd_tail: 603
> dev.ix.0.queue5.tso_tx: 17
> dev.ix.0.queue5.no_tx_dma_setup: 0
> dev.ix.0.queue5.no_desc_avail: 0
> dev.ix.0.queue5.tx_packets: 10639139790
> dev.ix.0.queue5.rxd_head: 404
> dev.ix.0.queue5.rxd_tail: 403
> dev.ix.0.queue5.rx_packets: 13144301293
> dev.ix.0.queue5.rx_bytes: 3986784766
> dev.ix.0.queue5.rx_copies: 8256195
> dev.ix.0.queue5.lro_queued: 0
> dev.ix.0.queue5.lro_flushed: 0
> dev.ix.0.queue6.interrupt_rate: 125000
> dev.ix.0.queue6.irqs: 7466940576
> dev.ix.0.queue6.txd_head: 1784
> dev.ix.0.queue6.txd_tail: 1784
> dev.ix.0.queue6.tso_tx: 2001
> dev.ix.0.queue6.no_tx_dma_setup: 0
> dev.ix.0.queue6.no_desc_avail: 0
> dev.ix.0.queue6.tx_packets: 9784312967
> dev.ix.0.que

RE: How can sshuttle be used properly with FreeBSD (and with DNS) ?

2014-09-08 Thread John Case


Hi Ryan,

Thanks for responding.

Just for the record, I removed my natd and ipdivert lines, so that 
sshuttle's divert rules were the only rules on the system ... I made my 
system work without my own natd/divert by putting some static route 
definitions into rc.conf.


Anyway, it still worked fine for tcp over the ssh tunnel, but it didn't 
help the UDP tunneling, which supports your conclusion.


What is the solution here ?  Or more importantly, what is even the 
problem?  sshuttle documentation (the readme) makes some vague references 
to FreeBSD not handling forwarding of UDP properly, which is why the 
diverts for it go into place at all ...


Do we solve this problem by fixing sshuttle (perhaps putting in more 
complex ipfw rules for it to inject) ?  Or do we solve this problem by 
fixing FreeBSD, and making forwarding "work" with UDP properly ?


It doesn't work at all now, but I'd like to at least get a sense as to 
what the real problem to solve here is ...


Thanks.
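
For context on the mechanism being discussed: natd, and per this thread the
ipfw rules that sshuttle installs for UDP, sit on a divert(4) socket, pull
packets that match an ipfw "divert" rule out of the stack with recvfrom(), and
push them back in with sendto(). A minimal hedged sketch of that pattern
follows; port 8668 is only an example value and would have to match an ipfw
rule such as "divert 8668 ip from any to any".

/*
 * Hedged sketch of the divert(4) socket pattern natd-like tools use.
 * It just reflects every diverted packet back into the stack unchanged.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#include <err.h>
#include <string.h>

int
main(void)
{
    unsigned char pkt[65535];
    struct sockaddr_in sin;
    socklen_t slen;
    ssize_t n;
    int fd;

    fd = socket(PF_INET, SOCK_RAW, IPPROTO_DIVERT);
    if (fd < 0)
        err(1, "socket");

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(8668);     /* must match the ipfw divert rule */
    if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        err(1, "bind");

    for (;;) {
        /* Full IP packet (TCP or UDP) diverted out of the stack. */
        slen = sizeof(sin);
        n = recvfrom(fd, pkt, sizeof(pkt), 0,
            (struct sockaddr *)&sin, &slen);
        if (n < 0)
            continue;
        /* ...inspect or rewrite the packet here (what natd does)... */

        /* Re-inject it at the point it was diverted from. */
        sendto(fd, pkt, n, 0, (struct sockaddr *)&sin, slen);
    }
    return (0);
}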


Re: help porting netmap to new driver

2014-09-08 Thread David
Hi

Sorry for the late response, I'll be using an ARM processor for a 1G link.

My first issue came when doing what you suggested and trying the "emulated
netmap mode". I need to cross-compile the code from my x86 machine, which I
think I did correctly by setting up my ARM toolchain and passing the kernel
source when doing the "make".

Now when I run the pkt-gen example I get :

[   57.602575] Unable to handle kernel paging request at virtual address
f143ca04
[   57.609740] pgd = 894c4000
[   57.612420] [f143ca04] *pgd=
[   57.615973] Internal error: Oops: 5 [#1] PREEMPT SMP ARM

That is where I have been stuck for a while now, trying to figure it out.

regards

2014-08-27 16:23 GMT-06:00 Luigi Rizzo :

>
>
>
> On Wed, Aug 27, 2014 at 3:18 PM, David  wrote:
>
>> Hi,
>>
>> I'm needing to use netmap on a custom driver, I don't understand the
>> content of the functions I need to implement and that are detailed on
>> "PORTING" file.
>>
>>
> sometimes (often, actually) the hw has bottlenecks that make native
> netmap mode almost useless.
> One thing you could try is the "emulated netmap mode" which
> is used by default if the driver has no native support.
> This is in the code on code.google.com/p/netmap/,
> the branch "next" (which is basically the code now in FreeBSD)
> has some fixes for that specific feature
>
> cheers
> luigi
>
>
> can someone give a hand to understand it better?
>>
>> regards
>>
>> --
>> David Díaz Barquero
>>
>> Ingeniería en Computadores
>> Tecnológico de Costa Rica
>
>
>
>
> --
> -+---
>  Prof. Luigi RIZZO, ri...@iet.unipi.it  . Dip. di Ing. dell'Informazione
>  http://www.iet.unipi.it/~luigi/. Universita` di Pisa
>  TEL  +39-050-2211611   . via Diotisalvi 2
>  Mobile   +39-338-6809875   . 56122 PISA (Italy)
> -+---
>



-- 
David Díaz Barquero

Ingeniería en Computadores
Tecnológico de Costa Rica
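
A note alongside the emulated-mode suggestion quoted above: the netmap
userspace API is the same whether the port is native or emulated, so a minimal
receive loop along these lines is a reasonable first smoke test on the ARM
board. This is only a sketch using the netmap_user.h helpers; "netmap:ix0" is
a placeholder interface name.

/* Minimal netmap receive loop -- illustrative sketch only. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

#include <poll.h>
#include <stdio.h>

int
main(void)
{
    struct nm_desc *d;
    struct nm_pkthdr h;
    const unsigned char *buf;
    struct pollfd pfd;

    /* Emulated mode is picked automatically if there is no native support. */
    d = nm_open("netmap:ix0", NULL, 0, NULL);
    if (d == NULL) {
        perror("nm_open");
        return (1);
    }
    pfd.fd = NETMAP_FD(d);
    pfd.events = POLLIN;
    for (;;) {
        if (poll(&pfd, 1, 1000) <= 0)
            continue;
        while ((buf = nm_nextpkt(d, &h)) != NULL)
            printf("received %u bytes\n", h.len);
    }
    /* not reached */
    nm_close(d);
    return (0);
}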

Re: [RFC] Patch to improve TSO limitation formula in general

2014-09-08 Thread Rick Macklem
Hans Petter Selasky wrote:
> On 09/06/14 00:09, Rick Macklem wrote:
> > Hans Petter Selasky wrote:
> >> On 09/05/14 23:19, Eric Joyner wrote:
> >>> There are some concerns if we use this with devices that ixl
> >>> supports:
> >>>
> >>> - The maximum fragment size is 16KB-1, which isn't a power of 2.
> >>>
> >>
> >> Hi Eric,
> >>
> >> Multiplying by powers of two is faster than by non-powers of two.
> >> So in this case you would have to use 8KB as a maximum.
> >>
> > Well, I'm no architecture expert, but I really doubt the CPU delay
> > of a
> > non-power of 2 multiply/divide is significant related to doing
> > smaller
> > TSO segments. Long ago (as in 1970s) I did work on machines where
> > shifts
> > for power of 2 multiply/divide was preferable, but these days I
> > doubt it
> > is going to matter??
> >
> >>> - You can't get the maximum TSO size for ixl devices by
> >>> multiplying
> >>> the
> >>> maximum number of fragments by the maximum size.
> >>> Instead the number of fragments is AFAIK unlimited, but a segment
> >>> can only
> >>> span 8 mbufs (including the [up to 3] mbufs containing the
> >>> header),
> >>> and the
> >>> maximum TSO size is 256KB.
> 
> Hi,
> 
> Maybe that can be a separate parameter?
> 
> I see that your patch assumes that a segment can be any-length. That
> is
> not always the case. Remember there are JUMBO mbufs too.
> 
I thought JUMBO mbufs were only going to be used on the receive side,
however I suppose if a packet is received into a JUMBO mbuf and then
forwarded on another interface...

> With my patch, the maximum segment size is a separate parameter. The
> total number of TSO bytes is then not so useful.
> 
Well, if you are referring to if_hw_tsomax, I'm not sure it was the
best plan to begin with. It was implemented for xen and I'm not sure
that any other driver uses it as of now.

However...
I'm not a network/hardware guy, but it seems some devices do have
the IP_MAXPACKET limit (use the ip_len field in the ip header to
know how large the TSO segment is). There is also at least one device
(82598 chip for "ix" driver) that can handle more than IP_MAXPACKET,
so it seems appropriate to have a value that the driver can set.

Since the maximum size of the gather list for transmit does seem to
vary a lot between devices, with several handling less than 35, it
does seem appropriate to allow drivers to specify that.

If devices can't handle a single gather entry over a certain size,
I think that does need to be specified along with the max size of
the gather list, since the driver will need to use multiple gather
entries for a single mbuf and tcp_output() should take that into
account.

In summary, yep, I basically agree.

rick
ps: I will look at your patch soon.

> --HPS
> 
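
To make the limits discussed above concrete, here is a hedged sketch of how a
driver's attach path might advertise them. Only if_hw_tsomax is an existing
struct ifnet field as of this thread; the two per-segment fields are assumed
names modelled on the patch under discussion, xxx_set_tso_limits() is an
invented name, and the numbers are arbitrary examples rather than the limits
of any real NIC.

#include <sys/param.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>
#include <netinet/in.h>
#include <netinet/ip.h>         /* IP_MAXPACKET */

/* Hypothetical helper called from a driver's attach routine. */
static void
xxx_set_tso_limits(struct ifnet *ifp)
{
    /* Largest TSO payload accepted in a single transmit request. */
    ifp->if_hw_tsomax = IP_MAXPACKET;       /* 65535 bytes */

    /*
     * Assumed fields from the proposed patch: the longest transmit
     * gather list the hardware accepts, and the largest single gather
     * entry, so tcp_output() can size TSO bursts accordingly.
     */
    ifp->if_hw_tsomaxsegcount = 35;             /* descriptors per packet */
    ifp->if_hw_tsomaxsegsize = 16 * 1024 - 1;   /* bytes per gather entry */
}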


[Bugzilla] Commit Needs MFC

2014-09-08 Thread bugzilla-noreply
Hi,

You have a bug in the "Needs MFC" state which has not been touched in 7 or more 
days. This email serves as a reminder that you may want to MFC this bug or 
mark it as completed.

In the event you have a longer MFC timeout you may update this bug with a 
comment and I won't remind you again for 7 days.

This reminder is only sent on Mondays.  Please file a bug about concerns you 
may have.

  This search was scheduled by ead...@freebsd.org.


 (1 bugs)

Bug 183659:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=183659
Severity: Affects Only Me
Priority: Normal
Hardware: Any
Assignee: freebsd-net@FreeBSD.org
  Status: Needs MFC
  Resolution: 
 Summary: [tcp] TCP stack lock contention with short-lived connections
