Re: pf and max bandwidth in nested queues (bug?)

2017-11-06 Thread Marko Cupać
I've just given 6.2 a spin, and queueing in PF actually does all I
want it to do: child queues get the parent queue's full bandwidth
while the parent is unsaturated, and are throttled back down to their
configured bandwidth when it is saturated.
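
To illustrate, with a minimal tree along these lines (interface name
and rates are made up for the example, not my actual config):

queue main on em0 bandwidth 10M
  queue web  parent main bandwidth 5M max 10M
  queue bulk parent main bandwidth 5M max 10M default

with only web traffic flowing, web climbs toward the parent's 10M,
and once bulk saturates the link as well, each settles back around
its declared 5M.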

Now those few years of pf queueing problems look so far away, almost
like they never happened :) Thanks to people who made it possible.
-- 
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/



Re: pf and max bandwidth in nested queues (bug?)

2017-11-02 Thread Marko Cupać
On Wed, 1 Nov 2017 13:22:03 +
Oliver Humpage  wrote:

> Hello,
>
> I have an OpenBSD 6.2 router, set up in a test rig so there's no
> traffic apart from my tests. It has vmx interfaces. $int_if is a vlan
> on one of them.
> 
> I have an issue where if a child queue has a different “max” from a
> parent queue, the bandwidth is throttled down to much less than
> either.

Hi fellow adventurer in PF queueing :)

I'd like authoritative, correct, field-tested answers to a number of
questions about PF queueing, but at the moment it appears there
aren't any. pf.conf(5) doesn't say much, and the PF FAQ's chapter on
queueing has been in the Attic for quite some time now:
http://cvsweb.openbsd.org/cgi-bin/cvsweb/www/faq/pf/Attic/queueing.html

So I guess it's down to you and me, and maybe someone else on this
list, to run the tests and get those answers ourselves.

I haven't yet gotten around to testing on 6.2, but in my experience
the only way to make queueing work as expected is to set all three
values - declared, min and max bandwidth - to the same figure on the
parent and on every child queue, where the sum of the child queues
has to be less than or equal to the parent. Pay attention to the fact
that only new states go into the appropriate queues, so (again from
my experience) every ruleset change needs a flush of states
(pfctl -F states). If you have NAT in the mix it complicates things
further, and I think tagging packets inbound on the internal
interface and queueing them on the external interface by tag is
the way to go.
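
As an untested sketch of what I mean - the interface names, network
and rates are all placeholders, not a real config:

ext_if  = "em0"
int_if  = "em1"
int_net = "10.0.0.0/24"

# declared, min and max all set to the same value; the
# children sum to no more than the parent
queue main on $ext_if bandwidth 4M min 4M max 4M
  queue voip parent main bandwidth 1M min 1M max 1M
  queue bulk parent main bandwidth 3M min 3M max 3M default

# tag on the way in through the internal interface, NAT on the
# way out, then queue by tag on the external interface so the
# NAT rewrite can't break the match
match in  on $int_if proto udp from $int_net to port sip tag VOIP
match out on $ext_if from $int_net nat-to ($ext_if)
match out on $ext_if tagged VOIP set queue voip
pass

And after every reload, flush states so existing connections land in
the right queues:

pfctl -f /etc/pf.conf && pfctl -F states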

You will get different answers from different people regarding
inbound (interface-wise) queueing: most people say it has no effect,
but some say it puts return traffic into the appropriate queues, so
apparently it does have some effect. Go figure, and let me know if
you do :)

If you search the misc@ archives for my posts, you will find quite a
number of rants about PF queueing. Not much useful info, though.

Now, what I'd really like to know is: if I have, say, a 4Mbit uplink
and four declared 1Mbit queues (without min and max values), what is
the logic for borrowing bandwidth from the non-saturated queues?
Because I can't for the life of me make any sense of it.
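
Concretely, something like this (interface and queue names are made
up):

queue up on em0 bandwidth 4M
  queue q1 parent up bandwidth 1M
  queue q2 parent up bandwidth 1M
  queue q3 parent up bandwidth 1M
  queue q4 parent up bandwidth 1M default

If only q1 is busy, does it get the full 4M, just its declared 1M,
or something in between - and once q2 wakes up as well, what decides
how the idle bandwidth is split?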

That being said, all the alternatives to OpenBSD are worse. I guess we
need to keep trying :)

Regards,
-- 
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/



Re: pf and max bandwidth in nested queues (bug?)

2017-11-01 Thread Erik van Westen
On 2017-11-01 at 14:22, Oliver Humpage wrote:
> Hello,
>
> I have an OpenBSD 6.2 router, set up in a test rig so there's no traffic 
> apart from my tests. It has vmx interfaces. $int_if is a vlan on one of them.
>
> I have an issue where if a child queue has a different “max” from a parent 
> queue, the bandwidth is throttled down to much less than either.
>
> [full ruleset, iperf numbers and the "min" experiment snipped - see
> the original message below]

I might be mistaken, but doesn't queueing only work on OUTgoing
traffic? One cannot control the rate at which traffic is delivered to
you, but one can control the rate at which traffic goes out of an
interface.
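
The distinction I have in mind, as a minimal sketch (interface name
invented):

queue out_root on em0 bandwidth 10M
  queue out_all parent out_root bandwidth 10M default

pf can pace what the box transmits on em0 with a tree like that, but
by the time it sees a packet arriving on em0, the packet has already
crossed the wire, so there is no rate left to control there.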

Erik



pf and max bandwidth in nested queues (bug?)

2017-11-01 Thread Oliver Humpage
Hello,

I have an OpenBSD 6.2 router, set up in a test rig so there's no traffic apart 
from my tests. It has vmx interfaces. $int_if is a vlan on one of them.

I have an issue where if a child queue has a different “max” from a parent 
queue, the bandwidth is throttled down to much less than either.

I have the following simple queue tree (eventually it will be bigger, this is 
just for testing):

queue inbound on $int_if bandwidth 100M
  queue inbound_all parent inbound bandwidth 30M max 30M
    queue inbound_std parent inbound_all bandwidth 20M max 30M default
pass on $int_if

This works, and an iperf test shunting data through the router from
ext->int gets around 30Mbit/s, as expected.
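
(By "iperf test" I mean something along these lines - the address is
just an example: run iperf -s on a host behind $int_if, then from the
external side run

iperf -c 10.0.0.10 -t 30

and read off the bandwidth iperf reports.)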

If I change the inbound_all queue's max to a slightly higher number,
this shouldn't have any effect at all - after all, the inbound_std
queue is still "bandwidth 20M max 30M", and neither of those numbers
exceeds the parent's:

queue inbound on $int_if bandwidth 100M
  queue inbound_all parent inbound bandwidth 30M max 40M
                                                     ^^^
    queue inbound_std parent inbound_all bandwidth 20M max 30M default
pass on $int_if

However, when I do this, connections assigned to inbound_std suddenly
only get around 2.3Mbit/s.

``systat q'' shows all packets are going into the correct queue.

As an experiment, I put a “min” level on inbound_std:

queue inbound_std parent inbound_all bandwidth 20M min 10M max 30M default

Then connections get that minimum bandwidth (here iperf reported
around 10Mbit/s), which shows the queue *can* use more than
2.3Mbit/s - but it sticks to the min rather than using all the
available bandwidth.

This seems like a bug to me, although I’m hesitant to suggest it since I have a 
lot of respect for the OpenBSD team. Does anyone have a suggestion as to what’s 
happening?

Thanks,

Oliver.