em2: watchdog timeout - resetting
Hello, I have a pfSense server in bridge mode acting as a transparent firewall. The problem is that a few seconds after I connect the pfSense box between my core router and my Catalyst core switch, messages like these start to appear:

em2: watchdog timeout - resetting
em2: watchdog timeout - resetting
em2: watchdog timeout - resetting
em2: watchdog timeout - resetting

and the connection drops repeatedly and indefinitely. Searching on Google, I tried some tweaks:

hw.em.rxd="4096"
hw.em.txd="4096"
hw.em.tx_int_delay="250"
hw.em.rx_int_delay="250"
hw.em.tx_abs_int_delay="250"
hw.em.rx_abs_int_delay="250"
hw.em.enable_msix="0"
hw.em.msix_queues="2"
hw.em.rx_process_limit="-1"
hw.em.fc_setting="0"
hw.em.num_queues=1

I tried several variations, also without success. My infrastructure:

(Cisco 7301 router) -- em1 (pfSense bridge0) em2 -- Cisco Catalyst core switch

I also tried a fresh installation of pfSense 2.0.3, without success. My network card is an Intel PRO/1000 MT Dual Port Server Adapter (PCI-X) in a 32-bit PCI slot, bridging em1 <-> em2. The motherboard is an ASUS P8H61-V with a Core i5 CPU. I tried disabling all onboard devices, also without success. I have four identical NICs and tested them one by one, all without success. I am using the latest BIOS from the ASUS site and tried various BIOS settings, also without success. I tried enabling device polling, which did not solve the problem. I tried disabling ACPI in the boot menu, but the server won't boot without ACPI. I tried disabling TOE and flow control, no success. This link carries about 60 Mbit/s with multiple VLANs (more than 200) for a WISP provider.

I tested the same server, same hardware, but running Linux (Debian 6) with the same bridges, and everything worked properly, so it seems clear this is a software problem (in the Intel em driver?). I also tested on another server (P5VD2-MX + Core 2 Duo) with the same NIC, and the problem occurs in the same way.
I noticed that em2 generates a huge number of interrupts, over 200 thousand:

# vmstat -i
interrupt           total    rate
irq16: em1          12891      40
irq17: em2         546630    1713
irq19: atapci0       5181      16
irq23: ehci0 ehci1   1049       3
cpu0: timer        637069    1997
irq256: em0          1557       4
cpu3: timer        636939    1996
cpu2: timer        636939    1996
cpu1: timer        636938    1996
Total             3115193    9765

Data for debugging:

[2.0.3-RELEASE][root@pfsense.localdomain]/root(1): sysctl hw.em
hw.em.eee_setting: 0
hw.em.rx_process_limit: 100
hw.em.enable_msix: 0
hw.em.sbp: 0
hw.em.smart_pwr_down: 0
hw.em.txd: 4096
hw.em.rxd: 4096
hw.em.rx_abs_int_delay: 66
hw.em.tx_abs_int_delay: 66
hw.em.rx_int_delay: 0
hw.em.tx_int_delay: 0

[2.0.3-RELEASE][root@pfsense.localdomain]/root(3): sysctl dev.em.2
dev.em.2.%desc: Intel(R) PRO/1000 Legacy Network Connection 1.0.4
dev.em.2.%driver: em
dev.em.2.%location: slot=0 function=1
dev.em.2.%pnpinfo: vendor=0x8086 device=0x1079 subvendor=0x8086 subdevice=0x1179 class=0x02
dev.em.2.%parent: pci4
dev.em.2.nvm: -1
dev.em.2.rx_int_delay: 0
dev.em.2.tx_int_delay: 0
dev.em.2.rx_abs_int_delay: 66
dev.em.2.tx_abs_int_delay: 66
dev.em.2.rx_processing_limit: 100
dev.em.2.flow_control: 0
dev.em.2.mbuf_alloc_fail: 0
dev.em.2.cluster_alloc_fail: 0
dev.em.2.dropped: 0
dev.em.2.tx_dma_fail: 0
dev.em.2.tx_desc_fail1: 0
dev.em.2.tx_desc_fail2: 0
dev.em.2.rx_overruns: 0
dev.em.2.watchdog_timeouts: 0
dev.em.2.device_control: 1076888137
dev.em.2.rx_control: 32794
dev.em.2.fc_high_water: 47104
dev.em.2.fc_low_water: 45604
dev.em.2.fifo_workaround: 0
dev.em.2.fifo_reset: 0
dev.em.2.txd_head: 243
dev.em.2.txd_tail: 243
dev.em.2.rxd_head: 1374
dev.em.2.rxd_tail: 1373
dev.em.2.mac_stats.excess_coll: 0
dev.em.2.mac_stats.single_coll: 0
dev.em.2.mac_stats.multiple_coll: 0
dev.em.2.mac_stats.late_coll: 0
dev.em.2.mac_stats.collision_count: 0
dev.em.2.mac_stats.symbol_errors: 0
dev.em.2.mac_stats.sequence_errors: 0
dev.em.2.mac_stats.defer_count: 0
dev.em.2.mac_stats.missed_packets: 0
dev.em.2.mac_stats.recv_no_buff: 0
dev.em.2.mac_stats.recv_undersize: 0
dev.em.2.mac_stats.recv_fragmented: 0
dev.em.2.mac_stats.recv_oversize: 0
dev.em.2.mac_stats.recv_jabber: 0
dev.em.2.mac_stats.recv_errs: 0
dev.em.2.mac_stats.crc_errs: 0
dev.em.2.mac_stats.alignment_errs: 0
dev.em.2.mac_stats.coll_ext_errs: 0
dev.em.2.mac_stats.xon_recvd: 0
dev.em.2.mac_stats.xon_txd: 0
dev.em.2.mac_stats.xoff_recvd: 0
dev.em.2.mac_stats.xoff_txd: 0
dev.em.2.mac_stats.total_pkts_recvd: 6681156
dev.em.2.mac_stats.good_pkts_recvd: 6681156
dev.em.2.mac_stats.bcast_pkts_recvd: 17313
dev.em.2.mac_stats.mcast_pkts_recvd: 156511
dev.em.2.mac_stats.rx_frames_64: 1199707
dev.em.2.mac_stats.rx_frames_65_127: 2110104
dev.em.2.mac_stats.rx_frames_128_255
Re: netmap bridge can tranmit big packet in line rate ?
--- On Tue, 5/21/13, Luigi Rizzo wrote:

> From: Luigi Rizzo
> Subject: Re: netmap bridge can tranmit big packet in line rate ?
> To: "Hooman Fazaeli"
> Cc: freebsd-net@freebsd.org
> Date: Tuesday, May 21, 2013, 10:39 AM
>
> the OP is commenting that on the receive side he is seeing a much
> lower number than on the tx side (A:ix1 489Kpps vs A:ix0 814Kpps).
>
> [pkt-gen -f tx ix0]-->--[ix0  bridge ]
> [     HOST A      ]     [   HOST B   ]
> [pkt-gen -f rx ix1]--<--[ix1         ]
>
> What is unclear is where the loss occurs.
>
> cheers
> luigi

The ixgbe driver has mac stats that will answer that. Just look at the sysctl output.

BC

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: netmap bridge can tranmit big packet in line rate ?
On Tue, May 21, 2013 at 06:51:12PM +0430, Hooman Fazaeli wrote:
> On 5/21/2013 5:10 PM, Barney Cordoba wrote:
> > These numbers indicate you're tx'ing 7.2Gb/s with 60 byte packets and
> > 9.8Gb/s with 1514, so maybe you just need a new calculator?
> >
> > BC
>
> As Barney pointed out already, your numbers are reasonable. You have
> almost saturated the link with 1514 byte packets. In the case of 64 byte
> packets, you do not achieve line rate, probably because of congestion on
> the bus. Can you show us "top -SI" output on the sender machine?

the OP is commenting that on the receive side he is seeing a much
lower number than on the tx side (A:ix1 489Kpps vs A:ix0 814Kpps).

[pkt-gen -f tx ix0]-->--[ix0  bridge ]
[     HOST A      ]     [   HOST B   ]
[pkt-gen -f rx ix1]--<--[ix1         ]

What is unclear is where the loss occurs.

cheers
luigi
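The asymmetry Luigi points at can be put in numbers. A quick sketch using the send/recv pairs reported earlier in the thread:

```python
# Fraction of packets lost between tx (host A, ix0) and rx (host A, ix1),
# computed from the pkt-gen figures reported in this thread.
def loss_pct(sent_pps, recvd_pps):
    return round((1 - recvd_pps / sent_pps) * 100, 1)

print(loss_pct(14_882_289, 13_994_753))  # 6.0  (% lost at 60 bytes)
print(loss_pct(814_288, 489_133))        # 39.9 (% lost at 1514 bytes)
```

The loss growing with packet size is the puzzle (per-packet cost should fall as packets get larger), which is why looking at the mac stats and instrumenting bridge.c, as suggested elsewhere in the thread, is the natural next step to localize the drop.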
Re: netmap bridge can tranmit big packet in line rate ?
On 21.05.2013 16:21, Hooman Fazaeli wrote:
> As Barney pointed out already, your numbers are reasonable. You have
> almost saturated the link with 1514 byte packets. In the case of 64 byte
> packets, you do not achieve line rate, probably because of congestion on
> the bus. Can you show us "top -SI" output on the sender machine?

Be aware that "line rate" for small packets is NOT raw link speed divided by packet size. There are also pre- and post-amble bits and the inter-frame gap to be considered. Those bits are on the wire too but invisible, as they are handled entirely by the ethernet NIC. The minimum size of an ethernet frame is 64 bytes (excluding the additional bits; 84 bytes including them) even though IP packets can be smaller. The difference is padded by the NIC. So the maximum is 14,880,960 pps at 64 bytes and 812,740 pps at 1500 bytes.

There are a number of resources explaining this issue in more detail:

http://www.cisco.com/web/about/security/intelligence/network_performance_metrics.html
http://ekb.spirent.com/resources/sites/SPIRENT/content/live/FAQS/1/FAQ10597/en_US/How_to_Test_10G_Ethernet_WhitePaper_RevB.PDF

--
Andre
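Andre's figures can be reproduced with a back-of-envelope calculation. This is a sketch using the standard Ethernet on-wire overheads (7-byte preamble, 1-byte SFD, 12-byte inter-frame gap; the 4-byte FCS is counted inside the frame); the exact integers differ by a few pps from the commonly quoted figures depending on rounding:

```python
# Theoretical 10GbE packet rate, accounting for on-wire overhead.
LINK_BPS = 10_000_000_000   # 10 Gb/s raw link speed
OVERHEAD = 7 + 1 + 12       # preamble + SFD + inter-frame gap, in bytes

def max_pps(frame_bytes):
    """frame_bytes includes the 4-byte FCS; 64 is the minimum frame size."""
    wire_bits = (frame_bytes + OVERHEAD) * 8
    return LINK_BPS // wire_bits

print(max_pps(64))    # 14880952 (~14.88 Mpps for minimum-size frames)
print(max_pps(1518))  # 812743   (~812 Kpps for full-size frames)
```

Note that a 1514-byte pkt-gen packet corresponds to a 1518-byte frame once the FCS is added, which matches the send rate of 814288 pps reported in the thread almost exactly.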
Re: netmap bridge can tranmit big packet in line rate ?
On 5/21/2013 5:10 PM, Barney Cordoba wrote:
>
> --- On Tue, 5/21/13, liujie wrote:
>
>> 60-byte packet  send 14882289 pps   recv 13994753 pps
>> 124-byte        send  8445770 pps   recv  7628942 pps
>> 252-byte        send  4529819 pps   recv  3757843 pps
>> 508-byte        send  2350815 pps   recv  1645647 pps
>> 1514-byte       send   814288 pps   recv   489133 pps
>
> These numbers indicate you're tx'ing 7.2Gb/s with 60 byte packets and
> 9.8Gb/s with 1514, so maybe you just need a new calculator?
>
> BC

As Barney pointed out already, your numbers are reasonable. You have almost saturated the link with 1514 byte packets. In the case of 64 byte packets, you do not achieve line rate, probably because of congestion on the bus. Can you show us "top -SI" output on the sender machine?

--
Best regards.
Hooman Fazaeli
Re: netmap bridge can tranmit big packet in line rate ?
--- On Tue, 5/21/13, liujie wrote:

> From: liujie
> Subject: Re: netmap bridge can tranmit big packet in line rate ?
> To: freebsd-net@freebsd.org
> Date: Tuesday, May 21, 2013, 5:25 AM
>
> Hi, Prof. Luigi RIZZO
>
> Firstly I should thank you for netmap. I tried to send an e-mail to you
> yesterday, but it was rejected.
>
> I used two machines to test the netmap bridge, both with an i7-2600 CPU
> and an Intel 82599 dual-interface card.
>
> One worked as sender and receiver with pkt-gen, the other worked as the
> bridge with bridge.c.
>
> As you said, I felt confused too when I saw the big-packet performance
> drop. I tried to change the memory parameters of netmap (netmap_mem1.c,
> netmap_mem2.c), but that did not seem to resolve the problem.
>
> 60-byte packet  send 14882289 pps   recv 13994753 pps
> 124-byte        send  8445770 pps   recv  7628942 pps
> 252-byte        send  4529819 pps   recv  3757843 pps
> 508-byte        send  2350815 pps   recv  1645647 pps
> 1514-byte       send   814288 pps   recv   489133 pps

These numbers indicate you're tx'ing 7.2Gb/s with 60 byte packets and 9.8Gb/s with 1514, so maybe you just need a new calculator?

BC
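Barney's point can be checked with a line of arithmetic. This sketch counts payload bytes only (excluding FCS, preamble, and inter-frame gap, which is why the 60-byte figure comes out near 10 Gb/s once that overhead is added back):

```python
# Payload throughput implied by the reported pkt-gen send rates.
def gbps(pps, pkt_bytes):
    return round(pps * pkt_bytes * 8 / 1e9, 2)

print(gbps(14_882_289, 60))   # 7.14 Gb/s of payload at 60 bytes
print(gbps(814_288, 1514))    # 9.86 Gb/s of payload at 1514 bytes
```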
Re: netmap bridge can tranmit big packet in line rate ?
On Tue, May 21, 2013 at 04:30:02AM -0700, liujie wrote:
> Thank marko.
>
> My machine mainboard chipset is intel c206, and network is dual-port
> intel x520 card.

while you run your tests, try to instrument bridge.c and see how many pps it actually receives and transmits. As Marko said, there might be congestion on the PCIe bus, but i would expect a lot more of it with small packet sizes than large ones.

cheers
luigi

> I'll find other machine to test once more.
Re: netmap bridge can tranmit big packet in line rate ?
Thanks Marko.

My machine's mainboard chipset is the Intel C206, and the network card is a dual-port Intel X520. I'll find another machine to test once more.
Re: netmap bridge can tranmit big packet in line rate ?
On Tuesday 21 May 2013 11:25:16 liujie wrote:
> Hi, Prof. Luigi RIZZO
>
> Firstly I should thank you for netmap. I tried to send an e-mail to you
> yesterday, but it was rejected.
>
> I used two machines to test the netmap bridge, both with an i7-2600 CPU
> and an Intel 82599 dual-interface card.
>
> One worked as sender and receiver with pkt-gen, the other worked as the
> bridge with bridge.c.
>
> As you said, I felt confused too when I saw the big-packet performance
> drop. I tried to change the memory parameters of netmap (netmap_mem1.c,
> netmap_mem2.c), but that did not seem to resolve the problem.
>
> 60-byte packet  send 14882289 pps   recv 13994753 pps
> 124-byte        send  8445770 pps   recv  7628942 pps
> 252-byte        send  4529819 pps   recv  3757843 pps
> 508-byte        send  2350815 pps   recv  1645647 pps
> 1514-byte       send   814288 pps   recv   489133 pps
>
> sender command:   pkt-gen -i ix0 -t 5 -l 60
> receiver command: pkt-gen -i ix1 -r 5
> bridge (other machine) command: bridge -i ix0 -i ix1
>
> Can the sender and receiver run on the same machine?

Most likely the PCIe path between the dual-ported card and the CPU is the bottleneck. Depending on the chipset and motherboard design, it may be that your card is only using 4 instead of 8 PCIe lanes, because some of the lanes may be "shared" with another PCIe slot, such as the one in which a graphics card is plugged in. You can also try experimenting with slightly overclocking the PCIe bus if the BIOS permits that - I had no problems overclocking the ixgbe and an AMD PCIe chipset by 25%.

Marko
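Marko's lane hypothesis is easy to quantify. A rough sketch, assuming PCIe 2.0 (which the 82599/X520 uses): 5 GT/s per lane with 8b/10b encoding, and ignoring TLP/DLLP protocol overhead, which eats a further slice in practice:

```python
# Usable PCIe 2.0 bandwidth per link width, before protocol overhead.
GT_PER_LANE = 5.0    # gigatransfers/s per lane (PCIe 2.0)
ENCODING = 8 / 10    # 8b/10b line coding: 10 line bits carry 8 data bits

def link_gbps(lanes):
    return GT_PER_LANE * ENCODING * lanes

print(link_gbps(8))  # 32.0 Gb/s - comfortable for a dual-port 10GbE card
print(link_gbps(4))  # 16.0 Gb/s - marginal once descriptor and doorbell
                     #             traffic is added on top of packet data
```

With only 4 lanes, two 10GbE ports moving data in the same direction already approach the usable link budget, consistent with the throughput collapsing at larger transfer volumes.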
ALTQ + em or ixgbe does not works.
Hello All,

As I believe more people have the same problem, I'm wondering if anyone is looking into this issue, which is pretty serious: ALTQ does not work with any of the ixgbe, em, or gbe drivers on FreeBSD 9.1-RELEASE. I saw Glebius also mention this problem a while ago.

Any solution?

Best Regards,
--
Marcelo Araujo
ara...@freebsd.org

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Re: netmap bridge can tranmit big packet in line rate ?
Hi, Prof. Luigi RIZZO

Firstly I should thank you for netmap. I tried to send an e-mail to you yesterday, but it was rejected.

I used two machines to test the netmap bridge, both with an i7-2600 CPU and an Intel 82599 dual-interface card. One worked as sender and receiver with pkt-gen, the other worked as the bridge with bridge.c.

As you said, I felt confused too when I saw the big-packet performance drop. I tried to change the memory parameters of netmap (netmap_mem1.c, netmap_mem2.c), but that did not seem to resolve the problem.

60-byte packet  send 14882289 pps   recv 13994753 pps
124-byte        send  8445770 pps   recv  7628942 pps
252-byte        send  4529819 pps   recv  3757843 pps
508-byte        send  2350815 pps   recv  1645647 pps
1514-byte       send   814288 pps   recv   489133 pps

sender command:   pkt-gen -i ix0 -t 5 -l 60
receiver command: pkt-gen -i ix1 -r 5
bridge (other machine) command: bridge -i ix0 -i ix1

Can the sender and receiver run on the same machine?

Thank you for your reply.
Re: netmap bridge can tranmit big packet in line rate ?
On Tue, May 21, 2013 at 6:26 AM, liujie wrote:
> My hardware setup is:
>   CPU: i7 2600, MEM: 16G, NETWORK: Intel 82599, OS: FreeBSD 9.1
> When the packet size increases, the transmit rate drops. A 1518-byte
> packet can only transmit at about 80%.

you should really tell us what numbers you see with various packet sizes, and whether you measure on the sender, on the receiver, or on the bridge (you say "using netmap bridge to transmit..." which makes it unclear where you are making the measurements, whether you are using two machines or one, etc.)

60-byte packets is the most challenging configuration for netmap; if you get line rate there, then there is no reason not to go as fast with larger packets.

> can you tell me your hardware setup?
> do you have any modification to bridge.c and netmap?

nothing different from the ones in the FreeBSD tree, and we used it on a variety of different machines including, I think, an i7-820.

cheers
luigi

> Thanks for your reply.

--
-+---
Prof. Luigi RIZZO, ri...@iet.unipi.it  .  Dip. di Ing. dell'Informazione
http://www.iet.unipi.it/~luigi/        .  Universita` di Pisa
TEL +39-050-2211611                    .  via Diotisalvi 2
Mobile +39-338-6809875                 .  56122 PISA (Italy)
-+---