Pete Heist <petehe...@gmail.com> writes:

>> On Nov 28, 2017, at 8:07 PM, Dave Taht <d...@taht.net> wrote:
>>
>> Pete Heist <petehe...@gmail.com> writes:
>>
>>> *** Round 3 Plans:
>>>
>>> * Use netem to test a spread of simulated rtts and bandwidths.
>>
>> Since you are leveraging a few too few boxes, attached are my current
>> scripts for fiddling a bit with network namespaces. I added individual
>> ssh, irtt, etc, servers so that things like flent's ssh stuff should
>> just work for polling stats.
>
> You mean a few too few virtual boxes, not necessarily physical ones,
> right? :)
>
> In other words, there are different topologies that can be used for
> testing. In your scripts you're simulating "Internet access", where the
> client is the family or organization, for example, and the server is
> stuff on the Internet:
>
> client - middlebox - delay - server
>
> In my earlier point-to-point WiFi testing I was simulating an ISP's
> backhaul:
>
> client - client_router - station ----- ap - server_router - server
>
> In my current testing I'm simulating, well, something far less useful if
> I think about it - two boxes blasting traffic to one another over a cable
> and trying to improve queueing delays between them:
>
> client - client_router --- server_router - server
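
(For reference, a minimal netns/veth sketch of that first "client -
middlebox - delay - server" topology. The namespace names, addresses and
rates below are illustrative rather than taken from the attached scripts,
and the delay is collapsed onto the middlebox's server-facing interface
instead of a separate delay box:)

  # three namespaces standing in for the three boxes
  ip netns add client
  ip netns add middle
  ip netns add server

  # client <-> middlebox and middlebox <-> server veth pairs
  ip link add c0 type veth peer name m0
  ip link add m1 type veth peer name s0
  ip link set c0 netns client
  ip link set m0 netns middle
  ip link set m1 netns middle
  ip link set s0 netns server

  # addressing and routing; the middlebox forwards between the two legs
  ip netns exec client ip addr add 10.0.0.2/24 dev c0
  ip netns exec client ip link set c0 up
  ip netns exec client ip route add 10.0.1.0/24 via 10.0.0.1
  ip netns exec middle ip addr add 10.0.0.1/24 dev m0
  ip netns exec middle ip addr add 10.0.1.1/24 dev m1
  ip netns exec middle ip link set m0 up
  ip netns exec middle ip link set m1 up
  ip netns exec middle sysctl -w net.ipv4.ip_forward=1
  ip netns exec server ip addr add 10.0.1.2/24 dev s0
  ip netns exec server ip link set s0 up
  ip netns exec server ip route add 10.0.0.0/24 via 10.0.1.1

  # qdisc under test on the middlebox's client-facing egress (assumes a tc
  # build with cake support), plus the simulated path delay toward the server
  ip netns exec middle tc qdisc add dev m0 root cake bandwidth 20mbit
  ip netns exec middle tc qdisc add dev m1 root netem delay 25ms limit 100000
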
Yes, in this case, the local TCP stack tends to interact better with
fq_codel than cake does, due in part to the NET_XMIT_CN support there.

I confess that my fav takeaway of your results thus far has been - it
doesn't crash. It still doesn't crash, here, either. We could upstream it
this week. :)

> I hadn't realized how heavy-weight traffic generation for anything beyond
> 4/4 flows would be at Gbit rates, or how confusing and trivial some of
> these results would be.
>
> So beyond just my vague idea to "simulate a spread of rtts and
> bandwidths", I see I need a topology change to produce something more
> useful. I think there are still two options:

Yes, I wouldn't trust the netem results on what you've got very far.

> 1) Point-to-point WiFi again, where I'd be using two NSM5s and testing
> over short range at rates of up to 100 Mbps. I would probably try to get
> FreeNet's APU version 1 boxes into action again so I'd have physical
> devices for each of the six roles above. I wish I didn't have to use
> those RTL8111Es, but that's how it is.
>
> 2) Your "Internet access" setup. Either I can get the veth stuff into
> action on a single physical APU2 (powerful enough?), or I can try to set
> up four physical boxes with the same topology (or, if I were tricky, try
> to spread the four roles across two boxes).

Getting the veth stuff set up for a basic test should be plug and go with
what I gave you. And you do have 4 cores, so try a few tests at 900Mbit to
see what happens.

I think George has the most powerful box we have readily available, and
perhaps it would be easier for him to give netns a go. I cannot trust the
results we get from the cloud (too many unknown VMs sharing the
hardware)... so I keep thinking, for Christmas, I will finally get around
to replacing snapon (which is doing LEDE build and server duty in Sweden).
I am presently getting in a good 40+ minute nap between kernel builds,
which a new box would also solve. (I like my naps, tho.)

All my computers are pretty low power - only the Core i5 NUCs and laptops
have fans, and those only kick in on builds - and I like a quiet work
environment, so trying to build the fastest box possible without howling
fans would be ideal. snapon (6 cores) cost about $2.5k when we got it (5?
years ago). It looks like for about the same price we can now get at least
8 cores. Things start going pear-shaped on (for example) the AMD
Threadripper (16 cores) - or Xeons, at $1k for the CPU at minimum (but a
30-second kernel build).

We could try to get a dedicated rack mount somewhere, but I don't know
where, or how much. I rather enjoyed what esr pulled off with "the great
beast". He had different requirements - in my case I'd like a sharable box
for builds and simulations. Were these actual simulations (e.g. ns3), the
virtual clock would make it OK to run in the cloud, but since we're on
bare metal, we'd need bare metal. Worse, to do this truly right, actually
starting to fiddle with 10Gig+ hardware in a realistic topology could also
be of use.

Arguably, those of you in Northern Europe need space heaters more than I
do...

PS: I note the veth file I sent had two errors in it - vdaemons was meant
to be vssh.sh, and the netem delay component should have had a "limit
100000" added to it. I'm still using these simplistic shell scripts 'cause
I haven't found anything better to construct topologies with (I want IPv6)
as yet, and I should probably get around to a public repo for 'em.
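
(For the record, the corrected delay line would look something like the
following - the interface name is illustrative, and the larger limit
matters because netem's default queue limit of 1000 packets otherwise
becomes the bottleneck and drops packets once delay is added at higher
rates:)

  # on the delay leg's egress interface (name illustrative)
  tc qdisc replace dev veth-delay root netem delay 20ms limit 100000
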
> #1 would be useful for FreeNet. Would it also be useful for Cake testing
> in general, or would you prefer more #2 results at this stage (i.e.
> simulating DSL, cable, satellite, etc.)?

I'm really happy with this stuff right now. I haven't taken apart the
tarball from the last attempt yet. 16x1, 10x1 results with ack filtering
against DSL speeds and typical cable speeds seem plausible.
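
(For anyone wanting to reproduce the DSL-ish case, a minimal sketch of
where the ack filter would sit - interface names and rates are
illustrative, and this assumes a tc build that already understands cake's
ack-filter keyword:)

  # narrow upstream of an asymmetric link - where ack filtering matters most
  tc qdisc replace dev eth0 root cake bandwidth 1mbit ack-filter
  # wider downstream, plain cake
  tc qdisc replace dev eth1 root cake bandwidth 16mbit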