thank you david. i am reading and experimenting. peter also gave me a very 
interesting tip about overloading tables and i am researching it in my spare 
time. i will try to create a setup that uses these capabilities of pf. i also 
know that pf is very good at managing tcp connections, connection counts 
from the same ip, etc. maybe i can find a way to manage bandwidth per ip by 
watching state counts, limiting tcp connections, adjusting window size, 
creating different tables, assigning them priorities, and so on. i really do 
not know enough about openbsd and the inner mechanics of unix to get involved 
with the C API, but thanks also for that tip.

i really hope to find a practical application. it does not have to be exactly 
on point; neither are the other mechanisms on the market, which mostly provide 
statistical fairness. but this kind of queueing is really important for many 
network admins, especially those who manage publicly accessible or campus 
networks with changing users. instead of chasing rabbits, divide the bandwidth 
equally, give the users (some) freedom in how they want to use their share, 
and be done with it.

anyway, i will try to formulate something and let you guys know. expect more 
questions on the way :)

thanks to all.

On 05 Feb 2016, at 11:59, Dahlberg, David 
<david.dahlb...@fkie.fraunhofer.de> wrote:

On Thursday, 04.02.2016, at 14:41 +0000, Tarkan Açan wrote:
what i want to achieve is, say we have a parent queue of 10M. when 5
users connect, they should all receive 2M bandwidth each. when 5 more
users connect, i want to bog down their bandwidth to 1M each. when the
connected users drop down to 8, i want to give them 1.25M each. i do
not have a certain number of users. the number constantly changes.

What you can do is the following:

queue root on em0 bandwidth 10M max 10M
queue q01 parent root bandwidth 2M max 2M
queue q02 parent root bandwidth 2M max 2M
...
queue q99 parent root bandwidth 2M max 2M

What this will do is the following: It gives all users an equal
linkshare[1] that maxes out at 2M[2].
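To actually steer traffic into those queues, the classification could look 
something like this (just a sketch; the interface name and addresses are 
hypothetical):

match out on em0 from 192.168.1.10 set queue q01
match out on em0 from 192.168.1.11 set queue q02
...

Each "match ... set queue" rule assigns packets from one source address to 
one of the per-user queues defined above.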



[1] The absolute number in "bandwidth 2M" is largely irrelevant. In
reality it is just a more memorable wording for the link-share (m2)
service curve of HFSC[3]. So "2M/2M/2M/2M" = "1/1/1/1" = "1G/1G/1G/1G" =
"equal link-sharing".

[2] If you just want link-sharing without a maximum, remove the "max"
parameter.

[3] http://conferences.sigcomm.org/sigcomm/1997/papers/p011.pdf
http://linux-ip.net/articles/hfsc.en/
http://linux-tc-notes.sourceforge.net/tc/doc/sch_hfsc.txt
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-5.4/man5/pf.conf.5
Vocabulary:
  "bandwidth" ~= "linkshare/ls m2"
  "min" ~= "realtime/rt"
  "max" ~= "upperlimit/ul"

CAVEATS:
10M is probably a very low percentage of your overall interface
bandwidth (1%?). Currently, OpenBSD's new queueing system does not
work very well in these circumstances because of HZ discretization and
rounding errors.

the config set of pf does not change until you load pf.conf again, so
adding and removing queues dynamically does not seem possible to me.

If you principally know your users in advance, you can configure them
statically. The link-share calculation only takes into account the
resources that are actually used.

But pf also supports dynamics:

tables:
* Configure as many queues as you're likely to require at the
  same time.
* Classify with tables.
* Tables are modifiable during runtime (pfctl -t)
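For example (the table and queue names here are hypothetical), in pf.conf:

table <users_q01> persist
match out on em0 from <users_q01> set queue q01

and then at runtime:

pfctl -t users_q01 -T add 192.168.1.23
pfctl -t users_q01 -T delete 192.168.1.23
pfctl -t users_q01 -T show

This lets you move users in and out of a queue without reloading the
ruleset.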

anchors:
* Uhm, dunno. You probably have to use some C API. Find out yourself.

Cheers,

David
