Re: Dual-rate transceivers with ixgbe?
Hmmm, this is odd; I'm sure that was tested by my validation engineer.
Tell me what the hardware looks like, i.e., what the 1G link partner is,
and I'll have him check into it... it SHOULD work. You could just ask me,
you know :)

Jack

On Tue, May 18, 2010 at 10:39 PM, Juli Mallett jmall...@freebsd.org wrote:
> Hey all,
>
> Has anyone out there been able to get link using a dual-rate
> transceiver at 1gig with the ixgbe driver in FreeBSD? I have an SFP+
> NIC and an Intel-branded dual-rate transceiver, but it will only get
> link at 10gig. For my testing I used the latest driver from FreeBSD
> trunk, backported to 7.x (which IIRC didn't take any significant
> changes; jfv@ does a good job of keeping the drivers working across
> branches). A similarly-versioned Linux driver (with only trivial and
> mostly cosmetic differences in the hardware code outside of the main
> driver) worked at 1gig just fine. All testing was done with switches
> rather than host-to-host connections.
>
> It seems like someone else must have tried this, and like there must
> just be some trivial difference in card initialization between FreeBSD
> and Linux, so I thought I'd ask publicly to see if anyone had patches.
> I instrumented the code extensively but so far have come up
> empty-handed.
>
> Thanks,
> Juli.
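[For anyone poking at the same setup, a minimal sketch of checking and
forcing media from userland. The interface name (ix0; older driver
revisions attached as ixgbe0) and the exact media word are assumptions
that depend on the driver version and transceiver, and whether forced
media works at all with a given SFP+ module is exactly what's in
question here:]

    # list the media types the driver registered for this interface
    ifconfig -m ix0

    # try pinning the link to a 1G media type instead of autoselect;
    # the media word (1000baseSX here) depends on the transceiver
    ifconfig ix0 media 1000baseSX

    # and back to the default
    ifconfig ix0 media autoselect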
Re: kern/146719: [pf] [panic] PF or dumynet kernel panic
Old Synopsis: PF or dumynet kernel panic
New Synopsis: [pf] [panic] PF or dumynet kernel panic

Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Responsible-Changed-When: Wed May 19 09:03:36 UTC 2010
Responsible-Changed-Why:

http://www.freebsd.org/cgi/query-pr.cgi?pr=146719
Re: Crashes with ixgbe on FreeBSD 8
On Tue, May 18, 2010 at 22:28 -0700, you wrote:

> Yes, download the latest code, oh hmmm, can you install STABLE, because
> there's one small issue that will cause a problem on 8 REL, but if
> that's a big problem I can also give you a patch so it will work.

Jack, thanks for the reply. I could upgrade to STABLE, but as I have
quite a number of machines in my setup (and not all of them need the
driver) I would prefer to stay with RELEASE if that is an option. So if
you could send me a patch, that would be very helpful.

> What I'd like to see you try is the newest code that I checked into
> HEAD today, that's what I am planning for 8.1 and I want as much
> testing as I can get anyway.

Sure, I'd be happy to try that. Can this be back-ported to RELEASE as
well, or does it need HEAD?

Thanks!

Robin

--
Robin Sommer * Phone +1 (510) 666-2886 * ro...@icir.org
ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org
Re: Crashes with ixgbe on FreeBSD 8
It will work on 8 RELEASE just fine, except for a macro define that ALTQ
required which isn't in RELEASE; what I did was stick the define into
the header and all was well. I was thinking that I would not check that
change in, but I now see it will probably be useful to have it in the
mainstream. I have another hot issue this morning, but I will try to
check the change into HEAD this afternoon; then you will be able to just
pull that, and the driver will drop into 8 REL with no problem.

Jack

On Wed, May 19, 2010 at 9:20 AM, Robin Sommer ro...@icir.org wrote:
> On Tue, May 18, 2010 at 22:28 -0700, you wrote:
>
>> Yes, download the latest code, oh hmmm, can you install STABLE,
>> because there's one small issue that will cause a problem on 8 REL,
>> but if that's a big problem I can also give you a patch so it will
>> work.
>
> Jack, thanks for the reply. I could upgrade to STABLE, but as I have
> quite a number of machines in my setup (and not all of them need the
> driver) I would prefer to stay with RELEASE if that is an option. So
> if you could send me a patch, that would be very helpful.
>
>> What I'd like to see you try is the newest code that I checked into
>> HEAD today, that's what I am planning for 8.1 and I want as much
>> testing as I can get anyway.
>
> Sure, I'd be happy to try that. Can this be back-ported to RELEASE as
> well, or does it need HEAD?
>
> Thanks!
>
> Robin
>
> --
> Robin Sommer * Phone +1 (510) 666-2886 * ro...@icir.org
> ICSI/LBNL * Fax +1 (510) 666-2956 * www.icir.org
increasing em(4) buffer sizes
Hi there,

We have a FreeBSD 7.2 Intel Server System box with 4GB RAM doing traffic
shaping and accounting. It has two em gigabit interfaces: one used for
input, the other for output, servicing around 500-600 mbps of load
through it. Traffic limiting is accomplished by dynamically setting up
IPFW pipes, which in turn work fine for our per-user traffic accounting
needs thanks to their byte counters. So the firewall is basically a
longish string of pipe rules.

This worked fine when the number of online users was low, but now, as
we've slowly begun servicing 2-3K online users, netstat -i's Ierrs
column is growing at a rate of 5-15K per hour for em0, the interface
used for input. Apparently searching through the firewall linearly for
_each_ arriving packet locks the interface for the duration of the
search (even though net.isr.direct=0), so some packets are periodically
dropped on input. To mitigate the problem I've set up a two-level hash
by means of skipto rules, cutting the number of rules searched per
packet from up to several thousand down to a mere 85 max, but the rate
of Ierrs has only increased to 40-50K per hour; I don't know why. I've
also tried setting these sysctls:

hw.intr_storm_threshold=1
dev.em.0.rx_processing_limit=3000

but they didn't help at all. BTW, the other current settings are:

kern.hz=4000
net.inet.ip.fw.verbose=0
kern.ipc.nmbclusters=11
net.inet.ip.fastforwarding=1
net.inet.ip.dummynet.io_fast=1
net.isr.direct=0
net.inet.ip.intr_queue_maxlen=5000

net.inet.ip.intr_queue_drops is always zero. I think the problem lies in
the buffer size of em not being large enough to buffer the packets as
they're arriving. I looked in /sys/dev/e1000/if_em.c and found this, in
em_attach():

    adapter->rx_buffer_len = 2048;

and later in em_initialize_receive_unit():

    switch (adapter->rx_buffer_len) {
    default:
    case 2048:
            rctl |= E1000_RCTL_SZ_2048;
            break;
    case 4096:
            rctl |= E1000_RCTL_SZ_4096 |
                E1000_RCTL_BSEX | E1000_RCTL_LPE;
            break;
    case 8192:
            rctl |= E1000_RCTL_SZ_8192 |
                E1000_RCTL_BSEX | E1000_RCTL_LPE;
            break;
    case 16384:
            rctl |= E1000_RCTL_SZ_16384 |
                E1000_RCTL_BSEX | E1000_RCTL_LPE;
            break;
    }

So apparently the default buffer size is 2048 bytes, and as much as
16384 is supported. But at what price? Those constants do look
suspicious. Can I blindly change rx_buffer_len in em_attach()? Sorry,
I'm not a kernel hacker :(

Thanks for reading and thanks for any tips. Any help is much
appreciated.
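[A minimal sketch of the kind of two-level skipto dispatch described
above, with hypothetical rule numbers and a made-up 10.0.0.0/16 customer
pool; a real ruleset of this shape would be script-generated:]

    #!/bin/sh
    # Level 1: one dispatch rule per /24 bucket, so each packet skips
    # straight to its own bucket instead of scanning every pipe rule.
    ipfw add 100 skipto 10000 ip from any to 10.0.0.0/24
    ipfw add 101 skipto 10100 ip from any to 10.0.1.0/24
    # ... one skipto per bucket ...

    # Level 2: per-user pipe rules inside bucket 10000. With the
    # default net.inet.ip.fw.one_pass=1, hitting a pipe ends the
    # rule search for that packet.
    ipfw add 10000 pipe 1 ip from any to 10.0.0.1
    ipfw add 10001 pipe 2 ip from any to 10.0.0.2
    # ... remaining users in this bucket ...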
Re: Multicast tftpd
On 05/18/2010 21:41, dave jones wrote:
> Hello,
>
> It seems that FreeBSD's tftpd doesn't support multicast. Does anyone
> know which multicast tftpd is available on FreeBSD? Thank you.

There was talk at one point about updating tftpd to the one in NetBSD. I
don't remember which one, but there should be a thread for this out
there somewhere.

--
jhell
Re: increasing em(4) buffer sizes
On Wed, May 19, 2010 at 10:51:43PM +0500, rihad wrote:
> We have a FreeBSD 7.2 Intel Server System box with 4GB RAM doing
> traffic shaping and accounting. It has two em gigabit interfaces: one
> used for input, the other for output, servicing around 500-600 mbps of
> load through it. Traffic limiting is accomplished by dynamically
> setting up IPFW pipes, which in turn work fine for our per-user
> traffic accounting needs thanks to their byte counters. So the
> firewall is basically a longish string of pipe rules. This worked fine
> when the number of online users was low, but now, as we've slowly
> begun servicing 2-3K online users, netstat -i's Ierrs column is
> growing at a rate of 5-15K per hour for em0, the interface used for
> input. Apparently searching through the firewall linearly for _each_
> arriving packet locks the interface for the duration of the search
> (even though net.isr.direct=0), so some packets are periodically
> dropped on input. To mitigate the problem I've set up a two-level hash
> by means of skipto rules, cutting the number of rules searched per
> packet from up to several thousand down to a mere 85 max, but the rate
> of Ierrs has only increased to 40-50K per hour; I don't know why. I've
> also tried setting these sysctls:

First, read: http://www.intel.com/design/network/applnots/ap450.htm
You'll see you may be restricted by your NIC chip's capabilities.

> hw.intr_storm_threshold=1
> dev.em.0.rx_processing_limit=3000
>
> but they didn't help at all. BTW, the other current settings are:
>
> kern.hz=4000
> net.inet.ip.fw.verbose=0
> kern.ipc.nmbclusters=11
> net.inet.ip.fastforwarding=1
> net.inet.ip.dummynet.io_fast=1
> net.isr.direct=0
> net.inet.ip.intr_queue_maxlen=5000
>
> net.inet.ip.intr_queue_drops is always zero. I think the problem lies
> in the buffer size of em not being large enough to buffer the packets
> as they're arriving. I looked in /sys/dev/e1000/if_em.c and found
> this, in em_attach():
>
>     adapter->rx_buffer_len = 2048;
>
> and later in em_initialize_receive_unit():
>
>     switch (adapter->rx_buffer_len) {
>     default:
>     case 2048:
>             rctl |= E1000_RCTL_SZ_2048;
>             break;
>     case 4096:
>             rctl |= E1000_RCTL_SZ_4096 |
>                 E1000_RCTL_BSEX | E1000_RCTL_LPE;
>             break;
>     case 8192:
>             rctl |= E1000_RCTL_SZ_8192 |
>                 E1000_RCTL_BSEX | E1000_RCTL_LPE;
>             break;
>     case 16384:
>             rctl |= E1000_RCTL_SZ_16384 |
>                 E1000_RCTL_BSEX | E1000_RCTL_LPE;
>             break;
>     }
>
> So apparently the default buffer size is 2048 bytes, and as much as
> 16384 is supported. But at what price? Those constants do look
> suspicious. Can I blindly change rx_buffer_len in em_attach()? Sorry,
> I'm not a kernel hacker :(

There are loader tunables, set them in /etc/loader.conf:

hw.em.rxd=4096
hw.em.txd=4096

The price is the amount of kernel memory the driver may consume. Maximum
MTU=16110 for em(4), so it can consume about 64MB of kernel memory for
that long input buffer, in theory.

Some more useful tunables for loader.conf:

dev.em.0.rx_int_delay=200
dev.em.0.tx_int_delay=200
dev.em.0.rx_abs_int_delay=200
dev.em.0.tx_abs_int_delay=200
dev.em.0.rx_processing_limit=-1

Alternatively, you may try kernel polling (ifconfig em0 polling) with
other tunables:

kern.hz=4000                # for /boot/loader.conf
kern.polling.burst_max=1000 # for /etc/sysctl.conf
kern.polling.each_burst=500

Eugene Grosbein
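[A rough sketch of the arithmetic behind that 64MB figure, assuming the
worst case of one maximum-size (16KB) receive buffer per descriptor at
the largest ring size; with the default 2KB buffers the cost is far
lower:]

    4096 descriptors x 16384 bytes = 67,108,864 bytes ~= 64 MB
    4096 descriptors x  2048 bytes =  8,388,608 bytes ~=  8 MB (default)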
Re: increasing em(4) buffer sizes
On 05/20/2010 12:05 AM, Eugene Grosbein wrote:
> On Wed, May 19, 2010 at 10:51:43PM +0500, rihad wrote:
>> We have a FreeBSD 7.2 Intel Server System box with 4GB RAM doing
>> traffic shaping and accounting. It has two em gigabit interfaces: one
>> used for input, the other for output, servicing around 500-600 mbps
>> of load through it. Traffic limiting is accomplished by dynamically
>> setting up IPFW pipes, which in turn work fine for our per-user
>> traffic accounting needs thanks to their byte counters. So the
>> firewall is basically a longish string of pipe rules. This worked
>> fine when the number of online users was low, but now, as we've
>> slowly begun servicing 2-3K online users, netstat -i's Ierrs column
>> is growing at a rate of 5-15K per hour for em0, the interface used
>> for input. Apparently searching through the firewall linearly for
>> _each_ arriving packet locks the interface for the duration of the
>> search (even though net.isr.direct=0), so some packets are
>> periodically dropped on input. To mitigate the problem I've set up a
>> two-level hash by means of skipto rules, cutting the number of rules
>> searched per packet from up to several thousand down to a mere 85
>> max, but the rate of Ierrs has only increased to 40-50K per hour; I
>> don't know why. I've also tried setting these sysctls:
>
> First, read: http://www.intel.com/design/network/applnots/ap450.htm
> You'll see you may be restricted by your NIC chip's capabilities.

Likely sooner rather than later these cards will be upgraded to 10 GigE
ones; I just want to make sure that the delays imposed by traversing the
firewall never cause traffic drops on input.

> There are loader tunables, set them in /etc/loader.conf:

Do you mean /boot/loader.conf?

> hw.em.rxd=4096
> hw.em.txd=4096

BTW, I can't read the current value:

$ sysctl hw.em.rxd
sysctl: unknown oid 'hw.em.rxd'
$

Is this a write-only value? :)

> The price is the amount of kernel memory the driver may consume.
> Maximum MTU=16110 for em(4), so it can consume about 64MB of kernel
> memory for that long input buffer, in theory.
>
> Some more useful tunables for loader.conf:
>
> dev.em.0.rx_int_delay=200
> dev.em.0.tx_int_delay=200
> dev.em.0.rx_abs_int_delay=200
> dev.em.0.tx_abs_int_delay=200
> dev.em.0.rx_processing_limit=-1

So this interrupt delay is the much talked about interrupt moderation?
Thanks, I'll try them. Is there any risk the machine won't boot with
them if rebooted remotely?

> Alternatively, you may try kernel polling (ifconfig em0 polling) with
> other tunables:
>
> kern.hz=4000                # for /boot/loader.conf
> kern.polling.burst_max=1000 # for /etc/sysctl.conf
> kern.polling.each_burst=500

Wow, I successfully used polling a couple of years ago when the load was
low, but then I read some posting on this list claiming that Intel cards
have the ability to do fast interrupts (interrupt moderation), but for
that DEVICE_POLLING needs to be out of the kernel. So I scratched it and
rebuilt the kernel for no apparent reason. Maybe you're right, polling
would've worked just fine, so I may go back to that too.
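[For reference, a minimal sketch of what switching to polling involves
on 7.x, assuming a custom kernel; DEVICE_POLLING must be compiled in
before the per-interface switch works:]

    # kernel config additions (then rebuild and install the kernel):
    #   options DEVICE_POLLING
    #   options HZ=1000

    # toggle per interface at runtime:
    ifconfig em0 polling     # enable polling on em0
    ifconfig em0 -polling    # revert to interrupt mode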
Re: increasing em(4) buffer sizes
On 20.05.2010 11:33, rihad wrote:
>> First, read: http://www.intel.com/design/network/applnots/ap450.htm
>> You'll see you may be restricted by your NIC chip's capabilities.
>
> Likely sooner rather than later these cards will be upgraded to 10
> GigE ones; I just want to make sure that the delays imposed by
> traversing the firewall never cause traffic drops on input.

That document is needed to understand the tunables mentioned later.

>> There are loader tunables, set them in /etc/loader.conf:
>
> Do you mean /boot/loader.conf?

Yes, /boot/loader.conf.

>> hw.em.rxd=4096
>> hw.em.txd=4096
>
> BTW, I can't read the current value:
>
> $ sysctl hw.em.rxd
> sysctl: unknown oid 'hw.em.rxd'
> $
>
> Is this a write-only value? :)

They are loader tunables that have no sysctl variables with the same
names - as an example of such a rare case :-) They are documented in
man em, though.

>> The price is the amount of kernel memory the driver may consume.
>> Maximum MTU=16110 for em(4), so it can consume about 64MB of kernel
>> memory for that long input buffer, in theory.
>>
>> Some more useful tunables for loader.conf:
>>
>> dev.em.0.rx_int_delay=200
>> dev.em.0.tx_int_delay=200
>> dev.em.0.rx_abs_int_delay=200
>> dev.em.0.tx_abs_int_delay=200
>> dev.em.0.rx_processing_limit=-1
>
> So this interrupt delay is the much talked about interrupt moderation?
> Thanks, I'll try them. Is there any risk the machine won't boot with
> them if rebooted remotely?

Yes, for hw.em.rxd/hw.em.txd only. em interfaces will not function if
you set these two too high for your NIC chip, so read man em and the
mentioned Intel document carefully. It's pretty safe to play with the
other tunables.

>> Alternatively, you may try kernel polling (ifconfig em0 polling) with
>> other tunables:
>>
>> kern.hz=4000                # for /boot/loader.conf
>> kern.polling.burst_max=1000 # for /etc/sysctl.conf
>> kern.polling.each_burst=500
>
> Wow, I successfully used polling a couple of years ago when the load
> was low, but then I read some posting on this list claiming that Intel
> cards have the ability to do fast interrupts (interrupt moderation),

Yes, they have, but some tuning for interrupt moderation may be needed.

> but for that DEVICE_POLLING needs to be out of the kernel.

That may have been true a long time ago, but not for 7.x, AFAIK. Just
use ifconfig to enable/disable polling per-device.

> So I scratched it and rebuilt the kernel for no apparent reason. Maybe
> you're right, polling would've worked just fine, so I may go back to
> that too.

For an ng_source-based outgoing flood of 64-byte UDP packets in a test
lab, I've obtained the best results with polling enabled, using FreeBSD
7.1. For a Pentium-D 2.8GHz dual-core and a desktop motherboard (Intel
D975XBX) with integrated NIC:

e...@pci0:4:0:0: class=0x02 card=0x30a58086 chip=0x109a8086 rev=0x00 hdr=0x00
    vendor   = 'Intel Corporation'
    device   = '82573L Intel PRO/1000 PL Network Adaptor'
    class    = network
    subclass = ethernet

I got 742359 pps / 772 Mbps at the wire. Note, that was not forwarding
speed, only an outgoing UDP flood. CPU horsepower was the limiting
factor; one core was fully loaded and the other idle.

Eugene Grosbein
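[Pulling the thread's suggestions into one place, a hypothetical
starting point; the values are the ones proposed above, not validated
for any particular chip, and hw.em.rxd/hw.em.txd must be checked against
man em and the Intel app note before use:]

    # /boot/loader.conf (loader tunables, applied at boot)
    hw.em.rxd=4096
    hw.em.txd=4096
    dev.em.0.rx_int_delay=200
    dev.em.0.tx_int_delay=200
    dev.em.0.rx_abs_int_delay=200
    dev.em.0.tx_abs_int_delay=200
    dev.em.0.rx_processing_limit=-1
    kern.hz=4000

    # /etc/sysctl.conf (only relevant if polling is enabled)
    kern.polling.burst_max=1000
    kern.polling.each_burst=500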