> On 13 Jan 2016, at 19:19, Marko Cupać <marko.cu...@mimar.rs> wrote:
>
> On Tue, 12 Jan 2016 16:40:58 +0100
> Claudio Jeker <cje...@diehard.n-r-g.com> wrote:
>
>> On Tue, Jan 12, 2016 at 05:33:06AM -0700, Daniel Melameth wrote:
>>> On Mon, Jan 11, 2016 at 9:37 PM, David Gwynne <da...@gwynne.id.au>
>>> wrote:
>>>>> On 11 Jan 2016, at 22:43, Daniel Melameth <dan...@melameth.com> wrote:
>>>>> On Sun, Jan 10, 2016 at 7:58 AM, Marko Cupać <marko.cu...@mimar.rs> wrote:
>>>>>> On Sat, 9 Jan 2016 11:11:27 -0700
>>>>>> Daniel Melameth <dan...@melameth.com> wrote:
>>>>>>> You NEED to set a max on your ROOT queues.
>>>>>> I came to this conclusion as well. But not only on root queues.
>>>>>> For example, when max is set on the root queue but only bandwidth
>>>>>> on the child queues, no shaping takes place...
>>>>> This works for me.
>>>>>> Or, to cut a long story short, if someone can paste a queue
>>>>>> definition which accomplishes 'give both queues max bandwidth,
>>>>>> but throttle traffic from the first queue when traffic from the
>>>>>> second one arrives', I will be more than happy to quit
>>>>>> bothering misc@ list readers with my rants and observations.
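
fwiw, the shape of config people usually reach for there is something
like this. it's an untested sketch against the current queue syntax;
the interface, rates, and ports are invented for illustration:

  # keep the root max just under the real link speed so pf owns
  # the bottleneck, not the device at the other end
  queue main on em0 bandwidth 10M max 9M
  queue  a parent main bandwidth 5M max 9M default
  queue  b parent main bandwidth 5M min 8M max 9M
  match out on em0 proto tcp to port 12346 set queue b

  # when b is idle, a can burst to the root max. when b has traffic,
  # its min should squeeze a down. whether min actually behaves like
  # that is exactly what's being questioned in this thread.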
>>>>> I would expect this to be possible with prio alone, but I've
>>>>> never been able to get it to work.  Perhaps I'm misunderstanding
>>>>> how prio works.
>>>> prio is basically an array of lists of packets to be transmitted.
>>>> high priority packets go on a different list to low priority packets.
>>>>
>>>> the problem is the way packets go on and off these lists. basically
>>>> as soon as a packet is queued on one of these lists for transmission,
>>>> we call the driver immediately to send it. generally as soon as a
>>>> packet is queued on the interface, it immediately gets dequeued by
>>>> the driver and transmitted on the hardware.
>>>>
>>>> it is only when you build up a backlog of packets that priq can come
>>>> into effect. the only way you can build up a backlog of packets is if
>>>> your hardware is slower at transmitting packets than the thing that
>>>> generates these packets to send.
>>>>
>>>> in your case you're probably getting packets from a relatively slow
>>>> internet connection and transmitting them on a high speed local
>>>> network. the transmit hardware is almost certainly going to be faster
>>>> than your source of packets, so you'll never build up a queue of
>>>> backlogged packets, so prio is effectively a nop.
>>>>
>>>> dlg
>>>
>>> Thanks for taking the time to chime in, guys.  Prior to implementing
>>> any queueing, I tested this stuff out on a LAN--so no slower
>>> connections were involved--and I was unable to see prio in action, at
>>> least not with any observable similarity to ALTQ's PRIQ.
>>>
>>> A simple rule set:
>>>
>>> match out on egress proto tcp to port 12345 set prio 7
>>> match out on egress proto tcp to port 12346 set prio 0
>>> pass
>>>
>>> Using tcpbench to push packets into both queues, I would have
>>> expected the packets destined for port 12346 to get throttled, but
>>> both flows simply reached an equilibrium, which I would have
>>> expected without prio.  Under PRIQ, I would have seen the flow to
>>> port 12346 get almost completely starved of bandwidth.  When doing
>>> non-prio queuing with a similarly simple ruleset, both flows
>>> properly matched their target bandwidth.
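
for concreteness, that test is the sort of thing you can reproduce with
two concurrent tcpbench flows; a sketch, with $peer standing in for the
receiving host:

  # on the receiver, one listener per port
  tcpbench -s -p 12345 &
  tcpbench -s -p 12346 &

  # on the sender, one flow into each prio
  tcpbench -p 12345 $peer &
  tcpbench -p 12346 $peer &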
>>
>> This assumes that you manage to fill the TX interface queue to a level
>> where it always refills the tx DMA rings before they run empty. On
>> high speed interfaces this is most of the time not the case, and so
>> both sessions are able to reach the maximum bandwidth.
>> To be honest, prio queues only make sense when you have a slow
>> interface (10Mbps) or a shaper in place that causes the queue to fill
>> up. There is currently no shaper you can use together with the prio
>> queues, so only option one remains.
>>
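
if you want to see prio do something on a test box, one way to get
claudio's "option one" is to force the link down to 10Mbps, assuming
the nic and its link partner cope with forced media:

  # slow the transmit side down so a backlog can actually form
  ifconfig em0 media 10baseT mediaopt full-duplex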
>
> Have we come to the conclusion that currently prio makes no sense at all?

it won't have the effect you want. that doesn't mean it doesn't make
sense somewhere else.

>
> Can I hope that saying 'currently' means this is not the intended
> design? Or should I make peace with the fact that, with OpenBSD and
> PF, I can forget about shaping inbound TCP traffic in a way that lets
> child queues expand to max link bandwidth unless there is congestion,
> while under congestion the admin can choose which child queues to
> throttle, and in which order?

hfsc might need some work at the code level, or it might just suck to
configure.
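
fwiw, the usual trick for inbound is to shape where the traffic is
outbound, ie on the inside interface, with the root max set a bit under
the actual downlink rate. a sketch with invented names and rates, not a
recipe:

  # em1 faces the lan; the downlink is nominally 20M
  queue lan on em1 bandwidth 100M max 19M
  queue  dl_bulk parent lan bandwidth 15M max 19M default
  queue  dl_live parent lan bandwidth 4M min 4M max 19M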

>
> --
> Before enlightenment - chop wood, draw water.
> After  enlightenment - chop wood, draw water.
>
> Marko Cupać
> https://www.mimar.rs/
