On 2016-06-10 7:11 AM, Jed Clear wrote:
>> On Jun 9, 2016, at 11:01 PM, Andrew Atrens <[email protected]> wrote:
>>> On 2016-06-09 8:47 PM, Jed Clear wrote:
>>> With the current setup, I ran top during the download. Never got lower
>>> than 25% idle time on the CPU. ~30% system and 40+% interrupt. 384M
>>> (of 512M) free memory, so no issue there. So it doesn't seem to be
>>> pegging the CPU with my full rule set.
>>
>> That's interesting. The relationship between CPU use and throughput is
>> pretty linear. You should be able to 'peg' your CPU unless something
>> (ipfw?) is somehow throttling. You don't have, by chance, 'options
>> DUMMYNET' configured?
>
> My regular rules do have dummynet queues on the uplink, which are now
> OBE. And I had initially overlooked the bandwidth setting there. In any
> event, the built-in "simple" rule set doesn't, I think.

Hmm .. a number of years ago (maybe around 2008) I experimented with ipfw
and have a vague memory of seeing some hardcoded macros in use in the code.
These would be day-1 things, because they were present in the DragonFlyBSD
fork back then.

>> It might also be a tcp-ack-prioritization issue. You did mention that
>> your uplink speed was kind of crappy. Uplink saturation will affect
>> downlink speed if tcp acks aren't getting upstream quickly enough.
>> Maybe ipfw is somehow exacerbating that.
>
> 5-6Mbps, crappy only in that $BIG_CABLE_CO disabled 5 of the 8 uplink
> channels when they "provisioned" my cable modem.

That should be oodles of bandwidth.

>> If the l2 bridge thing works and you're able to peg your net5501 CPU
>> then, notwithstanding vr driver fixes, you're probably at the limit.
>>
>> BUGS
>> The vr driver always copies transmit mbuf chains into longword-aligned
>> buffers prior to transmission in order to pacify the Rhine chips. If
>> buffers are not aligned correctly, the chip will round the supplied
>> buffer address and begin DMAing from the wrong location.
>> This buffer copying impairs transmit performance on slower systems but
>> cannot be avoided. On faster machines (e.g. a Pentium II), the
>> performance impact is much less noticeable.
>
> Interesting. You'd think the drivers would align the packets on Rx, and I
> vaguely recall that FreeBSD managed to go zero copy a few major versions
> ago.

Yes, it should be zero copy (or nearly always zero copy). I add that caveat
as the ipfw code has been somewhat neglected.

> Although that could be for straight forwarding.

Yes. In the forwarding case there should be no need to modify the mbuf.

> IPFW and NAT might behave differently. Not sure I'd have any control over
> it.
>
> Also recalled that I was running ntpd and named (caching) on the 5501.
> Turning off ntpd added 5 to the download speed. Turning both off didn't
> increase that.

Good to have those datapoints .. not surprised that named caching wouldn't
help for raw throughput. But I am surprised that disabling ntpd did help.
> Didn't think to unplug the GPS, which causes a 1 PPS DCD interrupt as
> well as the 4800 bps serial interrupts. Will give that a try, too. I have
> the PPS option in the kernel if anyone knows of an interaction.

Well, PPS stuff could impact netisr, which might affect receive processing.

There is a neat little tool called 'netstrain' -
http://netstrain.sourceforge.net - that would likely be helpful. It's small
enough to build statically and run on nanobsd. You could run it directly on
the 5501 to isolate tx vs rx performance. I think it uses udp, which would
help figure out if there's any weirdness around tcp, mtu, etc.

> -Jed
> _______________________________________________
> Soekris-tech mailing list
> [email protected]
> http://lists.soekris.com/mailman/listinfo/soekris-tech
