On Fri, Oct 13, 2017 at 7:47 PM, Mike Belopuhov <m...@belopuhov.com> wrote:
> On Fri, Oct 13, 2017 at 05:33 +0000, Daniel Melameth wrote:
>> In playing around with the new CoDel/fair traffic sharing, it's not
>> clear to me what the best way is to work with this when also using
>> the previous queuing.
>
> It's not clear to me either, at least not in the generic case :-)
> I guess it depends on what you're trying to achieve.
>
>> Will CoDel still work as expected if all my child queues have flows,
>> but my root queue is using "fifo" (revealed with systat queues)?
>
> Depends on what you expect CoDel to do.  Normally the idea here is
> to set an upper bound on the latency that all outgoing packets
> experience.  For example, if you have 10 connections, 2 of which are
> uploading data while the other 8 are a mix of ACKs and SSH keystrokes,
> with FIFO you'd normally see the bulk connections saturating the link
> and not leaving the other 8 connections a chance to send a packet.
>
> So you go and create those HFSC queues and try to reserve bandwidth
> for your ACKs, SSH and whatnot.  The approach that FQ-CoDel takes is
> different.  You no longer need to reserve bandwidth, as FQ-CoDel
> attempts to make the bandwidth "available" when needed -- this is
> essentially what fair sharing is.  In practice this means that those
> 8 connections are able to send their small packets "practically"
> whenever they want without disrupting your uploads.
>
> This means that if all you want is for your outgoing connections to
> fair-share the bandwidth, you don't need to reserve any bandwidth at
> all.
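> To make that concrete: with fair sharing a single root flow queue is
> enough, e.g. (the interface name and bandwidth here are made-up
> placeholders)
>
>   queue outq on em0 bandwidth 9M flows 1024 qlimit 1024 default
>
> instead of the classic reservation style where you carve up the link
> yourself, roughly
>
>   queue main on em0 bandwidth 9M
>   queue ssh  parent main bandwidth 1M
>   queue bulk parent main bandwidth 8M default
>   match out on em0 proto tcp to port 22 set queue ssh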
>
>> Assuming it does, if one of my child queues is just for TCP ACKs, does
>> it make sense to have a small quantum for this queue, but a larger
>> quantum for a child queue that focuses on bulk file transfers?
>
> The quantum of service just tilts the balance, at the expense of extra
> CPU cycles and potentially extra overall latency.  I think you need to
> figure out the big picture first and then fine-tune.
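> For example (queue names and numbers below are made up for
> illustration, not a recommendation), the split you describe would
> look roughly like
>
>   queue main on em0 bandwidth 20M
>   queue ack  parent main bandwidth 1M  flows 256  quantum 300
>   queue bulk parent main bandwidth 19M flows 1024 quantum 1500 default
>
> where the small quantum lets the tiny ACKs get dequeued quickly each
> round and the large one favors full-sized frames -- but again,
> measure first.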
>
>> Or is
>> CoDel orthogonal to child queues, only really working well with a
>> single root flow queue (and requiring me to give up bandwidth control
>> with child queues)?
>
> "Works well this way or that way" would imply that we have enough data
> to make such a judgement.  At the moment we don't.  Last week we had it
> running with 8192 flows feeding into an LTE connection with a rather
> flaky 50Mbit/s downlink (150Mbit/s up) for about a hundred users.
> With a few HFSC tweaks we had almost no observable SSH latency, and
> ping times to 8.8.8.8 stayed around 25ms with fairly low variation.  This
> setup used two root queues: one on the uplink, one on the downlink.
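> Roughly, the shape of it was (interface names and figures below are
> placeholders, not the actual tweaks):
>
>   queue up   on em0 bandwidth 140M flows 8192 qlimit 1024 default
>   queue down on em1 bandwidth 45M  flows 8192 qlimit 1024 default
>
> The "down" queue sits on the internal interface, since pf only queues
> packets it transmits.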
>
>> Also, the pf.conf man page says the default qlimit is 1024, but, if I
>> don't specify a qlimit, pfctl -vsq shows a qlength of 50 when I was
>> expecting it to be 1024.  What am I missing?
>
> I've updated the man page today to address some of the concerns since
> the same question was also brought up on reddit yesterday:
>
> https://www.reddit.com/r/openbsd/comments/75ps6h/fqcodel_and_pf/
>
> The gist of it is that 1024 is not the HFSC default.  When you specify
> both "flows" and "bandwidth", thus requesting the FQ-CoDel queue
> manager for your HFSC queue, the HFSC default qlimit (50) still
> applies.  It's a bit counter-intuitive, I guess, so I've removed the
> mention of this from the man page.
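> In other words, if you want the larger limit you have to ask for it,
> e.g. (the same hypothetical queue, shown once without and once with
> an explicit qlimit):
>
>   queue outq on em0 bandwidth 20M flows 1024 default               # qlength shows 50
>   queue outq on em0 bandwidth 20M flows 1024 qlimit 1024 default   # qlength shows 1024
>
> "pfctl -vsq" (or systat queues) will show which limit is in effect.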

Thanks for taking the time for a detailed reply, Mike.  From your
Reddit post, it seems that, for queues using both flows and bandwidth,
it makes sense to always override the HFSC qlimit default, but will
this increase latency (in the same way a higher qlimit increases
latency on a queue with no flows)?  I'll see what I can dig up on
CoDel and quantum outside of OpenBSD circles.

That said, I've been piloting various queuing scenarios in a Hyper-V
environment, but I haven't been able to make much progress here
because, it appears, there's some timing issue with HFSC and/or hvn(4)
(thank you for terminating my use of de(4), which was horrible under
Hyper-V!).  I can never seem to reach my modest bandwidth
specifications with something like tcpbench, but perhaps that's better
left for another thread, or I should just get on the vmd(8) bandwagon.

Cheers.
