On Sat, Jul 28, 2018 at 11:03 PM, Dave Taht wrote:
>
>
> On Sat, Jul 28, 2018 at 11:39 AM Pete Heist wrote:
>
>>
>> On Jul 28, 2018, at 7:32 PM, Dave Taht wrote:
>>
>>
>> Exactly. Many members, including myself, are limited by our CPE links
>> during off hours, and by the backhaul during high traffic hours.
> On Jul 28, 2018, at 8:12 PM, Toke Høiland-Jørgensen wrote:
>
> Priority field sets tin, class sets flow. Both need the qdisc id as its major
> number, iirc. And both can be set from the same bpf filter which can be run
> in direct action mode...
This works for me. :)
I only tested so far b
so I was tempted to try. I didn't enable namespaces, just coded this
little bit up...
A very quick test of this setup showed I wasn't routing packets through the
veth interfaces, so I figure I need network namespaces, and... oh look! the
sun's out!
#!/bin/bash
# Create the bridge
ip link add cable type bridge
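A hedged sketch of where that script might go next (the names user1/veth1 and the 10.0.1.0/24 address are made up, not from the thread): packets only route through a veth pair if the far end lives somewhere, e.g. a network namespace, which is likely what the quick test above was missing.

```shell
# Hypothetical continuation of the script above. Moving one end of a
# veth pair into a namespace gives the packets somewhere to route to.
ip netns add user1
ip link add veth1 type veth peer name veth1br
ip link set veth1 netns user1
ip netns exec user1 ip addr add 10.0.1.2/24 dev veth1
ip netns exec user1 ip link set dev veth1 up
# Attach the bridge-side end to the "cable" bridge created above:
ip link set dev veth1br master cable
ip link set dev veth1br up
```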
> On Jul 28, 2018, at 9:03 PM, Dave Taht wrote:
>
> under load on the NanoStation 5 AC Loco’s I got for the camp’s backhaul. Is
> it really that good? This is in contrast to the 50+ms I see with rrul_be on
> the NanoStation M5 (without controlling the queue).
>
> ubnt both cases? doubt it's
If I get time, what I might try is:
one veth per user with cake bandwidth whatever
routing (no bpf) the ip address subset to each veth
10x1 oversubscription
bridging all those veths together into one interface and... just applying pie
or codel to that 'cause there are just too many flows to conce
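A rough, untested sketch of the plan above; the interface names (cable bridge, veth pairs), per-user rate, and subnets are assumptions, not settings from the thread:

```shell
# One veth per user, each shaped by its own cake instance; the user's
# subnet is plainly routed at the veth (no bpf involved).
for i in 1 2; do
    ip link add veth$i type veth peer name veth${i}br
    tc qdisc replace dev veth$i root cake bandwidth 10Mbit
    ip route add 10.0.$i.0/24 dev veth$i
    ip link set dev veth${i}br master cable
    ip link set dev veth$i up
    ip link set dev veth${i}br up
done
# One shared AQM on the aggregate, since tracking per-flow state for
# every user at once is too much: codel (or pie) on the bridge itself.
tc qdisc replace dev cable root codel
```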
On Sat, Jul 28, 2018 at 11:39 AM Pete Heist wrote:
>
>
> On Jul 28, 2018, at 7:32 PM, Dave Taht wrote:
>
>
> Exactly. Many members, including myself, are limited by our CPE links during
> off hours, and by the backhaul during high traffic hours.
>
>
> 3 items
>
> 1) Co-locating some essential services (like netflix) might be of help.
> On Jul 28, 2018, at 7:32 PM, Dave Taht wrote:
>>
>> Exactly. Many members, including myself, are limited by our CPE links during
>> off hours, and by the backhaul during high traffic hours.
>
> 3 items
>
> 1) Co-locating some essential services (like netflix) might be of help.
That’s a good
On 28 July 2018 19:53:58 CEST, Jonathan Morton wrote:
>>> Note that with the existing tc classifier stuff we already added to
>>> Cake, we basically have this already (eBPF can map traffic to tin
>and
>>> flow however it pleases).
>>
>> Sorry, this just jostled in my brain now that I may be able to implement
>> member fairness today, based on what you wrote
Priority field sets tin, class sets flow. Both need the qdisc id as its major
number, iirc. And both can be set from the same bpf filter which can be run in
direct action mode...
-Toke
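As a concrete, untested sketch of what Toke describes above (the device eth0, the bandwidth, and the object/section names tin_classifier.o/classifier are assumptions, not from the thread):

```shell
# cake at major number 1:, then an eBPF classifier attached in
# direct-action ("da") mode. Inside the BPF program (not shown),
# skb->priority selects the tin and skb->tc_classid selects the flow;
# both must carry 1: (0x10000) as their major number to be honoured
# by this qdisc instance.
tc qdisc replace dev eth0 root handle 1: cake diffserv4 bandwidth 100Mbit
tc filter add dev eth0 parent 1: bpf obj tin_classifier.o sec classifier da
```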
On 28 July 2018 19:56:35 CEST, Dave Taht wrote:
>https://github.com/iovisor/bcc/blob/master/src/cc/compat/linux/bpf.h
On Sat, Jul 28, 2018 at 10:54 AM Jonathan Morton wrote:
>
> >> Note that with the existing tc classifier stuff we already added to
> >> Cake, we basically have this already (eBPF can map traffic to tin and
> >> flow however it pleases).
> >
> > Sorry, this just jostled in my brain now that I may be able to implement
> > member fairness today, based on what you wrote
https://github.com/iovisor/bcc/blob/master/src/cc/compat/linux/bpf.h#L
says you can get at the priority field.
On Sat, Jul 28, 2018 at 10:52 AM Dave Taht wrote:
>
> On Sat, Jul 28, 2018 at 10:38 AM Pete Heist wrote:
> >
> >
> > On Jul 28, 2018, at 10:56 AM, Toke Høiland-Jørgensen wrote:
> >
>> Note that with the existing tc classifier stuff we already added to
>> Cake, we basically have this already (eBPF can map traffic to tin and
>> flow however it pleases).
>
> Sorry, this just jostled in my brain now that I may be able to implement
> member fairness today, based on what you wrote
On Sat, Jul 28, 2018 at 10:38 AM Pete Heist wrote:
>
>
> On Jul 28, 2018, at 10:56 AM, Toke Høiland-Jørgensen wrote:
>
> Note that with the existing tc classifier stuff we already added to
> Cake, we basically have this already (eBPF can map traffic to tin and
> flow however it pleases).
>
>
> Sorry, this just jostled in my brain now that I may be able to implement
> member fairness today, based on what you wrote
> On Jul 28, 2018, at 10:56 AM, Toke Høiland-Jørgensen wrote:
>
> Note that with the existing tc classifier stuff we already added to
> Cake, we basically have this already (eBPF can map traffic to tin and
> flow however it pleases).
Sorry, this just jostled in my brain now that I may be able to implement
member fairness today, based on what you wrote
On Sat, Jul 28, 2018 at 9:41 AM Pete Heist wrote:
>
> On Jul 28, 2018, at 10:06 AM, Jonathan Morton wrote:
>
> This sounds like a relatively complex network topology, in which there are a
> lot of different potential bottlenecks, depending on the dynamic state of the
> network.
>
>
> It is, which is the argument from those who want a more centralized
> On Jul 28, 2018, at 5:04 PM, Dave Taht wrote:
>
> I'm lovin this discussion.
>
> a couple notes:
>
> 1) IF you go the full monty and create an isp oriented qdisc, for
> gawd's sake come up with a googleable name.
> Things like pie, cake, bobbie, tart are good codenames, fq_codel
> horrific, "streamboost" is a canonical example of a great name.
so, as I gradually built up my lab over the past few weeks I slammed
ubuntu into one of my prized apu-2s to take a look at cake. This
particular box has 4 hardware queues, so I made cake the default qdisc
to see what happened at line rate. Running netserver locally on it
(which is a dumb idea, I ju
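For anyone reproducing this apu2 experiment, a hedged sketch of one way to make cake the default qdisc (assumes a kernel with sch_cake; eth0 is a stand-in device name):

```shell
# On a multiqueue NIC (like the apu2's 4 hardware queues), the default
# qdisc is instantiated once per hardware queue under mq.
sysctl -w net.core.default_qdisc=cake
# Re-create the root so the new default takes effect immediately,
# then inspect the per-queue children:
tc qdisc replace dev eth0 root mq
tc qdisc show dev eth0
```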
> On Jul 28, 2018, at 10:06 AM, Jonathan Morton wrote:
>
> This sounds like a relatively complex network topology, in which there are a
> lot of different potential bottlenecks, depending on the dynamic state of the
> network.
It is, which is the argument from those who want a more centralized
On Sat, Jul 28, 2018 at 9:19 AM Jonathan Morton wrote:
>
> > On 28 Jul, 2018, at 6:04 pm, Dave Taht wrote:
> >
> > for gawd's sake come up with a googleable name.
> > Things like pie, cake, bobbie, tart are good codenames, fq_codel
> > horrific, "streamboost" is a canonical example of a great name.
On Sat, Jul 28, 2018 at 9:11 AM Jonathan Morton wrote:
>
> > On 28 Jul, 2018, at 6:51 pm, Dave Taht wrote:
> >
> > That's also pretty low end. On the high end nowadays there's stuff like
> > this:
> >
> > https://www.amazon.com/Intel-Xeon-E5-2698-Hexadeca-core-Processor/dp/B00PDD1QES
>
> Intel is no longer high-end for x86 CPUs.
> On 28 Jul, 2018, at 6:04 pm, Dave Taht wrote:
>
> for gawd's sake come up with a googleable name.
> Things like pie, cake, bobbie, tart are good codenames, fq_codel
> horrific, "streamboost" is a canonical example of a great name.
Suggestions on a postcard.
- Jonathan Morton
> On 28 Jul, 2018, at 6:51 pm, Dave Taht wrote:
>
> That's also pretty low end. On the high end nowadays there's stuff like this:
>
> https://www.amazon.com/Intel-Xeon-E5-2698-Hexadeca-core-Processor/dp/B00PDD1QES
Intel is no longer high-end for x86 CPUs. Not all of the market has realised
this.
On Thu, Jul 26, 2018 at 11:07 AM Dan Siemon wrote:
>
> On Thu, 2018-07-26 at 08:48 -0700, Dave Taht wrote:
> > On Thu, Jul 26, 2018 at 8:46 AM Dan Siemon wrote:
> > >
> > > Tiny bit of self promotion here but Preseem (
> > > https://www.preseem.com)
> > > is a transparent bridge that leverages HT
I'm lovin this discussion.
a couple notes:
1) IF you go the full monty and create an isp oriented qdisc, for
gawd's sake come up with a googleable name.
Things like pie, cake, bobbie, tart are good codenames, fq_codel
horrific, "streamboost" is a canonical example of a great name. At the
moment I
Jonathan Morton writes:
> Yes, eBPF does seem to be a good fit for that.
>
> So in summary, the logical flow of a packet should be:
>
> 1: Map dst or src IP to subscriber (eBPF).
> 2: Map subscriber to speed/overhead tier (eBPF).
> 3: (optional) Classify Diffserv (???).
> 4: Enqueue per flow, han
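Steps 1 and 2 of that flow can be approximated today without eBPF, as a hedged sketch (the device, rate, and subscriber address are made up): a u32 match on the subscriber's address steers the packet into a tin that encodes the speed tier.

```shell
# Map src IP -> subscriber -> tier by forcing skb->priority. With cake
# at handle 1:, a priority whose major number matches the handle selects
# the tin by its minor number (here tin 1, the bulk tier).
tc qdisc replace dev eth0 root handle 1: cake diffserv4 bandwidth 500Mbit
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip src 10.0.0.2/32 \
    action skbedit priority 1:1
```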
> There are some older backhaul routers still with 2.6.26.8(!) although those
> are being phased out so don’t count them. More current ones use 3.16.7 and
> there’s some discussion but I’m not sure what/when the upgrade plan is. I
> think the Internet router uses a more modern Debian 9 which is
> On Jul 26, 2018, at 11:38 PM, Jonathan Morton wrote:
>
> It would also be valuable to have a firmer handle on the actual requirements
> in the field. For example, if it is feasible to focus only on current Linux
> kernels, then a lot of backwards compatibility cruft can be excised when
> im