Re: PF queue bandwidth limited to 32bit value

2023-09-16 Thread Andy Lemin



> On 15 Sep 2023, at 18:54, Stuart Henderson  wrote:
> 
> On 2023/09/15 13:40, Andy Lemin wrote:
>> Hi Stuart,
>> 
>> Seeing as it seems like everyone is too busy, and my workaround
>> (not queue some flows on interfaces with queue defined) seems of no
>> interest,
> 
> well, it might be, but I'm not sure if it will fit with how
> queues work..

Well I can only hope some more developers sees this :)

> 
>> and my current hack to use queuing on Vlan interfaces is
>> a very incomplete and restrictive workaround; Would you please be
>> so kind as to provide me with a starting point in the source code
>> and variable names to concentrate on, where I can start tracing from
>> beginning to end for changing the scale from bits to bytes?
> 
> maybe try hfsc.c, but overall there are quite a few files involved
> in queue definition and use from start to finish. or going from the
> other side start with how pfctl defines queues and follow through
> from there.
> 

Thank you, I will try (best effort as time permits), and see how far I get.. 
(probably not far ;)




Re: PF queue bandwidth limited to 32bit value

2023-09-15 Thread Stuart Henderson
On 2023/09/15 13:40, Andy Lemin wrote:
> Hi Stuart,
> 
> Seeing as it seems like everyone is too busy, and my workaround
> (not queue some flows on interfaces with queue defined) seems of no
> interest,

well, it might be, but I'm not sure if it will fit with how
queues work..

> and my current hack to use queuing on Vlan interfaces is
> a very incomplete and restrictive workaround; Would you please be
> so kind as to provide me with a starting point in the source code
> and variable names to concentrate on, where I can start tracing from
> beginning to end for changing the scale from bits to bytes?

maybe try hfsc.c, but overall there are quite a few files involved
in queue definition and use from start to finish. or going from the
other side start with how pfctl defines queues and follow through
from there.



Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andy Lemin
Hi Stuart,Seeing as it seems like everyone is too busy, and my workaround (not queue some flows on interfaces with queue defined) seems of no interest, and my current hack to use queuing on Vlan interfaces is a very incomplete and restrictive workaround;Would you please be so kind as to provide me with a starting point in the source code and variable names to concentrate on, where I can start tracing from beginning to end for changing the scale from bits to bytes?Thanks :)AndyOn 14 Sep 2023, at 19:34, Andrew Lemin  wrote:On Thu, Sep 14, 2023 at 7:23 PM Andrew Lemin  wrote:On Wed, Sep 13, 2023 at 8:35 PM Stuart Henderson  wrote:On 2023-09-13, Andrew Lemin  wrote:
> I have noticed another issue while trying to implement a 'prio'-only
> workaround (using only prio ordering for inter-VLAN traffic, and HSFC
> queuing for internet traffic);
> It is not possible to have internal inter-vlan traffic be solely priority
> ordered with 'set prio', as the existence of 'queue' definitions on the
> same internal vlan interfaces (required for internet flows), demands one
> leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
> the 'default' queue despite queuing not being wanted, and so
> unintentionally clamping all internal traffic to 4294M just because full
> queuing is needed for internet traffic.

If you enable queueing on an interface all traffic sent via that
interface goes via one queue or another.Yes, that is indeed the very problem. Queueing is enabled on the inside interfaces, with bandwidth values set slightly below the ISP capacities (multiple ISP links as well), so that all things work well for all internal users.However this means that inter-vlan traffic from client networks to server networks are restricted to 4294Mbps for no reason.. It would make a huge difference to be able to allow local traffic to flow without being queued/restircted. 

(also, AIUI the correct place for queues is on the physical interface
not the vlan, since that's where the bottleneck is... you can assign
traffic to a queue name as it comes in on the vlan but I believe the
actual queue definition should be on the physical iface).Hehe yes I know. Thanks for sharing though.I actually have very specific reasons for doing this (queues on the VLAN ifaces rather than phy) as there are multiple ISP connections for multiple VLANs, so the VLAN queues are set to restrict for the relevant ISP link etc.Also separate to the multiple ISPs (I wont bore you with why as it is not relevant here), the other reason for queueing on the VLANs is because it allows you to get closer to the 10Gbps figure..Ie, If you have queues on the 10Gbps PHY, you can only egress 4294Mbps to _all_ VLANs. But if you have queues per-VLAN iface, you can egress multiple times 4294Mbps on aggregate.Eg, vlans 10,11,12,13 on single mcx0 trunk. 10->11 can do 4294Mbps and 12->13 can do 4294Mbps, giving over 8Gbps egress in total on the PHY. It is dirty, but like I said, desperate for workarounds... :(  

"required for internet flows" - depends on your network layout.. the
upstream feed doesn't have to go via the same interface as inter-vlan
traffic.I'm not sure what you mean. All the internal networks/vlans are connected to local switches, and the switches have trunk to the firewall which hosts the default gateway for the VLANs and does inter-vlan routing.So all the clients go through the same VLANs/trunk/gateway for inter-vlan as they do for internet. Strict L3/4 filtering is required on inter-vlan traffic.I am honestly looking for support to recognise that this is a correct, valid and common setup, and so there is a genuine need to allow flows to not be queued on interfaces that have queues (which has many potential applications for many use cases, not just mine - so should be of interest to the developers?).Do you know why there has to be a default queue? Yes I know that traffic excluded from queues would take from the same interface the queueing is trying to manage, and potentially causes congestion. However with 10Gbps networking which is beyond common now, this does not matter when the queues are stuck at 4294MbpsDesperately trying to find workarounds that appeal.. Surely the need is a no brainer, and it is just a case of trying to encourage interest from a developer?Thanks :)



Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andrew Lemin
On Thu, Sep 14, 2023 at 7:23 PM Andrew Lemin  wrote:

>
>
> On Wed, Sep 13, 2023 at 8:35 PM Stuart Henderson <
> stu.li...@spacehopper.org> wrote:
>
>> On 2023-09-13, Andrew Lemin  wrote:
>> > I have noticed another issue while trying to implement a 'prio'-only
>> > workaround (using only prio ordering for inter-VLAN traffic, and HSFC
>> > queuing for internet traffic);
>> > It is not possible to have internal inter-vlan traffic be solely
>> priority
>> > ordered with 'set prio', as the existence of 'queue' definitions on the
>> > same internal vlan interfaces (required for internet flows), demands one
>> > leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
>> > the 'default' queue despite queuing not being wanted, and so
>> > unintentionally clamping all internal traffic to 4294M just because full
>> > queuing is needed for internet traffic.
>>
>> If you enable queueing on an interface all traffic sent via that
>> interface goes via one queue or another.
>>
>
> Yes, that is indeed the very problem. Queueing is enabled on the inside
> interfaces, with bandwidth values set slightly below the ISP capacities
> (multiple ISP links as well), so that all things work well for all internal
> users.
> However this means that inter-vlan traffic from client networks to server
> networks are restricted to 4294Mbps for no reason.. It would make a huge
> difference to be able to allow local traffic to flow without being
> queued/restircted.
>
>
>>
>> (also, AIUI the correct place for queues is on the physical interface
>> not the vlan, since that's where the bottleneck is... you can assign
>> traffic to a queue name as it comes in on the vlan but I believe the
>> actual queue definition should be on the physical iface).
>>
>
> Hehe yes I know. Thanks for sharing though.
> I actually have very specific reasons for doing this (queues on the VLAN
> ifaces rather than phy) as there are multiple ISP connections for multiple
> VLANs, so the VLAN queues are set to restrict for the relevant ISP link etc.
>

Also separate to the multiple ISPs (I wont bore you with why as it is not
relevant here), the other reason for queueing on the VLANs is because it
allows you to get closer to the 10Gbps figure..
Ie, If you have queues on the 10Gbps PHY, you can only egress 4294Mbps to
_all_ VLANs. But if you have queues per-VLAN iface, you can egress multiple
times 4294Mbps on aggregate.
Eg, vlans 10,11,12,13 on single mcx0 trunk. 10->11 can do 4294Mbps and
12->13 can do 4294Mbps, giving over 8Gbps egress in total on the PHY. It is
dirty, but like I said, desperate for workarounds... :(


>
>
>>
>> "required for internet flows" - depends on your network layout.. the
>> upstream feed doesn't have to go via the same interface as inter-vlan
>> traffic.
>
>
> I'm not sure what you mean. All the internal networks/vlans are connected
> to local switches, and the switches have trunk to the firewall which hosts
> the default gateway for the VLANs and does inter-vlan routing.
> So all the clients go through the same VLANs/trunk/gateway for inter-vlan
> as they do for internet. Strict L3/4 filtering is required on inter-vlan
> traffic.
> I am honestly looking for support to recognise that this is a correct,
> valid and common setup, and so there is a genuine need to allow flows to
> not be queued on interfaces that have queues (which has many potential
> applications for many use cases, not just mine - so should be of interest
> to the developers?).
>
> Do you know why there has to be a default queue? Yes I know that traffic
> excluded from queues would take from the same interface the queueing is
> trying to manage, and potentially causes congestion. However with 10Gbps
> networking which is beyond common now, this does not matter when the queues
> are stuck at 4294Mbps
>
> Desperately trying to find workarounds that appeal.. Surely the need is a
> no brainer, and it is just a case of trying to encourage interest from a
> developer?
>
> Thanks :)
>


Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andrew Lemin
On Wed, Sep 13, 2023 at 8:35 PM Stuart Henderson 
wrote:

> On 2023-09-13, Andrew Lemin  wrote:
> > I have noticed another issue while trying to implement a 'prio'-only
> > workaround (using only prio ordering for inter-VLAN traffic, and HSFC
> > queuing for internet traffic);
> > It is not possible to have internal inter-vlan traffic be solely priority
> > ordered with 'set prio', as the existence of 'queue' definitions on the
> > same internal vlan interfaces (required for internet flows), demands one
> > leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
> > the 'default' queue despite queuing not being wanted, and so
> > unintentionally clamping all internal traffic to 4294M just because full
> > queuing is needed for internet traffic.
>
> If you enable queueing on an interface all traffic sent via that
> interface goes via one queue or another.
>

Yes, that is indeed the very problem. Queueing is enabled on the inside
interfaces, with bandwidth values set slightly below the ISP capacities
(multiple ISP links as well), so that all things work well for all internal
users.
However this means that inter-vlan traffic from client networks to server
networks are restricted to 4294Mbps for no reason.. It would make a huge
difference to be able to allow local traffic to flow without being
queued/restircted.


>
> (also, AIUI the correct place for queues is on the physical interface
> not the vlan, since that's where the bottleneck is... you can assign
> traffic to a queue name as it comes in on the vlan but I believe the
> actual queue definition should be on the physical iface).
>

Hehe yes I know. Thanks for sharing though.
I actually have very specific reasons for doing this (queues on the VLAN
ifaces rather than phy) as there are multiple ISP connections for multiple
VLANs, so the VLAN queues are set to restrict for the relevant ISP link etc.


>
> "required for internet flows" - depends on your network layout.. the
> upstream feed doesn't have to go via the same interface as inter-vlan
> traffic.


I'm not sure what you mean. All the internal networks/vlans are connected
to local switches, and the switches have trunk to the firewall which hosts
the default gateway for the VLANs and does inter-vlan routing.
So all the clients go through the same VLANs/trunk/gateway for inter-vlan
as they do for internet. Strict L3/4 filtering is required on inter-vlan
traffic.
I am honestly looking for support to recognise that this is a correct,
valid and common setup, and so there is a genuine need to allow flows to
not be queued on interfaces that have queues (which has many potential
applications for many use cases, not just mine - so should be of interest
to the developers?).

Do you know why there has to be a default queue? Yes I know that traffic
excluded from queues would take from the same interface the queueing is
trying to manage, and potentially causes congestion. However with 10Gbps
networking which is beyond common now, this does not matter when the queues
are stuck at 4294Mbps

Desperately trying to find workarounds that appeal.. Surely the need is a
no brainer, and it is just a case of trying to encourage interest from a
developer?

Thanks :)


Re: PF queue bandwidth limited to 32bit value

2023-09-14 Thread Andrew Lemin
On Wed, Sep 13, 2023 at 8:22 PM Stuart Henderson 
wrote:

> On 2023-09-12, Andrew Lemin  wrote:
> > A, thats clever! Having bandwidth queues up to 34,352M would
> definitely
> > provide runway for the next decade :)
> >
> > Do you think your idea is worth circulating on tech@ for further
> > discussion? Queueing at bps resolution is rather redundant nowadays, even
> > on the very slowest links.
>
> tech@ is more for diffs or technical questions rather than not-fleshed-out
> quick ideas. Doing this would solve some problems with the "just change it
> to 64-bit" mooted on the freebsd-pf list (not least with 32-bit archs),
> but would still need finding all the places where the bandwidth values are
> used and making sure they're updated to cope.
>
>
Yes good point :) I am not in a position to undertake this myself at the
moment.
If none of the generous developers feel included to do this despite the
broad value, I might have a go myself at some point (probably not able
until next year sadly).

"just change it to 64-bit" mooted on the freebsd-pf list - I have been
unable to find this conversation. Do you have a link?


>
> --
> Please keep replies on the mailing list.
>
>


Re: PF queue bandwidth limited to 32bit value

2023-09-13 Thread Stuart Henderson
On 2023-09-13, Andrew Lemin  wrote:
> I have noticed another issue while trying to implement a 'prio'-only
> workaround (using only prio ordering for inter-VLAN traffic, and HSFC
> queuing for internet traffic);
> It is not possible to have internal inter-vlan traffic be solely priority
> ordered with 'set prio', as the existence of 'queue' definitions on the
> same internal vlan interfaces (required for internet flows), demands one
> leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
> the 'default' queue despite queuing not being wanted, and so
> unintentionally clamping all internal traffic to 4294M just because full
> queuing is needed for internet traffic.

If you enable queueing on an interface all traffic sent via that
interface goes via one queue or another.

(also, AIUI the correct place for queues is on the physical interface
not the vlan, since that's where the bottleneck is... you can assign
traffic to a queue name as it comes in on the vlan but I believe the
actual queue definition should be on the physical iface).

"required for internet flows" - depends on your network layout.. the
upstream feed doesn't have to go via the same interface as inter-vlan
traffic.
 



Re: PF queue bandwidth limited to 32bit value

2023-09-13 Thread Stuart Henderson
On 2023-09-12, Andrew Lemin  wrote:
> A, thats clever! Having bandwidth queues up to 34,352M would definitely
> provide runway for the next decade :)
>
> Do you think your idea is worth circulating on tech@ for further
> discussion? Queueing at bps resolution is rather redundant nowadays, even
> on the very slowest links.

tech@ is more for diffs or technical questions rather than not-fleshed-out
quick ideas. Doing this would solve some problems with the "just change it
to 64-bit" mooted on the freebsd-pf list (not least with 32-bit archs),
but would still need finding all the places where the bandwidth values are
used and making sure they're updated to cope.


-- 
Please keep replies on the mailing list.



Re: PF queue bandwidth limited to 32bit value

2023-09-12 Thread Andrew Lemin
On Wed, Sep 13, 2023 at 3:43 AM Andrew Lemin  wrote:

> Hi Stuart.
>
> On Wed, Sep 13, 2023 at 12:25 AM Stuart Henderson <
> stu.li...@spacehopper.org> wrote:
>
>> On 2023-09-12, Andrew Lemin  wrote:
>> > Hi all,
>> > Hope this finds you well.
>> >
>> > I have discovered that PF's queueing is still limited to 32bit bandwidth
>> > values.
>> >
>> > I don't know if this is a regression or not.
>>
>> It's not a regression, it has been capped at 32 bits afaik forever
>> (certainly was like that when the separate classification via altq.conf
>> was merged into PF config, in OpenBSD 3.3).
>>
>
> Ah ok, it was talked about so much I thought it was part of it. Thanks for
> clarifying.
>
>
>>
>> >  I am sure one of the
>> > objectives of the ALTQ rewrite into the new queuing system we have in
>> > OpenBSD today, was to allow bandwidth values larger than 4294M. Maybe I
>> am
>> > imagining it..
>>
>> I don't recall that though there were some hopes expressed by
>> non-developers.
>>
>
> Haha, it is definitely still wanted and needed. prio-only based ordering
> is too limited
>

I have noticed another issue while trying to implement a 'prio'-only
workaround (using only prio ordering for inter-VLAN traffic, and HSFC
queuing for internet traffic);
It is not possible to have internal inter-vlan traffic be solely priority
ordered with 'set prio', as the existence of 'queue' definitions on the
same internal vlan interfaces (required for internet flows), demands one
leaf queue be set as 'default'. Thus forcing all inter-vlan traffic into
the 'default' queue despite queuing not being wanted, and so
unintentionally clamping all internal traffic to 4294M just because full
queuing is needed for internet traffic.
In fact 'prio' is irrelevant, as with or without 'prio' because queue's are
required for internet traffic, all internal traffic becomes bound by the
'default' HSFC queue.

So I would propose that the mandate on the 'default' keyword is relaxed (or
a new keyword is provided for match/pass rules to force flows to not be
queued), and/or implement the uint32 scale in bytes, instead of bits?

I personally believe both are valid and needed?


>
>
>>
>> > Anyway, I am trying to use OpenBSD PF to perform/filter Inter-VLAN
>> routing
>> > with 10Gbps trunks, and I cannot set the queue bandwidth higher than a
>> > 32bit value?
>> >
>> > Setting the bandwidth value to 4295M results in a value overflow where
>> > 'systat queues' shows it wrapped and starts from 0 again. And traffic is
>> > indeed restricted to such values, so does not appear to be just a
>> cosmetic
>> > 'systat queues' issue.
>> >
>> > I am sure this must be a bug/regression,
>>
>> I'd say a not-implemented feature (and I have a feeling it is not
>> going to be all that simple a thing to implement - though changing
>> scales so the uint32 carries bytes instead of bits per second might
>> not be _too_ terrible).
>>
>
> Following the great work to SMP unlock in the VLAN interface, and recent
> NIC optimisations (offloading and interrupt handling) in various drivers,
> you can now push packet filtered 10Gbps with modern CPUs without breaking a
> sweat..
>
> A, thats clever! Having bandwidth queues up to 34,352M would
> definitely provide runway for the next decade :)
>
> Do you think your idea is worth circulating on tech@ for further
> discussion? Queueing at bps resolution is rather redundant nowadays, even
> on the very slowest links.
>
>
>> >  10Gbps on OpenBSD is trivial
>> and
>> > common nowadays..
>>
>> While using interfaces with 10Gbps link speed on OpenBSD is trivial,
>> actually pushing that much traffic (particularly with more complex
>> processing e.g. things like bandwidth controls, and particularly with
>> smaller packet sizes) not so much.
>>
>>
>> --
>> Please keep replies on the mailing list.
>>
>>
Thanks again, Andy.


Re: PF queue bandwidth limited to 32bit value

2023-09-12 Thread Andrew Lemin
Hi Stuart.

On Wed, Sep 13, 2023 at 12:25 AM Stuart Henderson 
wrote:

> On 2023-09-12, Andrew Lemin  wrote:
> > Hi all,
> > Hope this finds you well.
> >
> > I have discovered that PF's queueing is still limited to 32bit bandwidth
> > values.
> >
> > I don't know if this is a regression or not.
>
> It's not a regression, it has been capped at 32 bits afaik forever
> (certainly was like that when the separate classification via altq.conf
> was merged into PF config, in OpenBSD 3.3).
>

Ah ok, it was talked about so much I thought it was part of it. Thanks for
clarifying.


>
> >  I am sure one of the
> > objectives of the ALTQ rewrite into the new queuing system we have in
> > OpenBSD today, was to allow bandwidth values larger than 4294M. Maybe I
> am
> > imagining it..
>
> I don't recall that though there were some hopes expressed by
> non-developers.
>

Haha, it is definitely still wanted and needed. prio-only based ordering is
too limited


>
> > Anyway, I am trying to use OpenBSD PF to perform/filter Inter-VLAN
> routing
> > with 10Gbps trunks, and I cannot set the queue bandwidth higher than a
> > 32bit value?
> >
> > Setting the bandwidth value to 4295M results in a value overflow where
> > 'systat queues' shows it wrapped and starts from 0 again. And traffic is
> > indeed restricted to such values, so does not appear to be just a
> cosmetic
> > 'systat queues' issue.
> >
> > I am sure this must be a bug/regression,
>
> I'd say a not-implemented feature (and I have a feeling it is not
> going to be all that simple a thing to implement - though changing
> scales so the uint32 carries bytes instead of bits per second might
> not be _too_ terrible).
>

Following the great work to SMP unlock in the VLAN interface, and recent
NIC optimisations (offloading and interrupt handling) in various drivers,
you can now push packet filtered 10Gbps with modern CPUs without breaking a
sweat..

A, thats clever! Having bandwidth queues up to 34,352M would definitely
provide runway for the next decade :)

Do you think your idea is worth circulating on tech@ for further
discussion? Queueing at bps resolution is rather redundant nowadays, even
on the very slowest links.


> >  10Gbps on OpenBSD is trivial and
> > common nowadays..
>
> While using interfaces with 10Gbps link speed on OpenBSD is trivial,
> actually pushing that much traffic (particularly with more complex
> processing e.g. things like bandwidth controls, and particularly with
> smaller packet sizes) not so much.
>
>
> --
> Please keep replies on the mailing list.
>
>


Re: PF queue bandwidth limited to 32bit value

2023-09-12 Thread Stuart Henderson
On 2023-09-12, Andrew Lemin  wrote:
> Hi all,
> Hope this finds you well.
>
> I have discovered that PF's queueing is still limited to 32bit bandwidth
> values.
>
> I don't know if this is a regression or not.

It's not a regression, it has been capped at 32 bits afaik forever
(certainly was like that when the separate classification via altq.conf
was merged into PF config, in OpenBSD 3.3).

>  I am sure one of the
> objectives of the ALTQ rewrite into the new queuing system we have in
> OpenBSD today, was to allow bandwidth values larger than 4294M. Maybe I am
> imagining it..

I don't recall that though there were some hopes expressed by
non-developers.

> Anyway, I am trying to use OpenBSD PF to perform/filter Inter-VLAN routing
> with 10Gbps trunks, and I cannot set the queue bandwidth higher than a
> 32bit value?
>
> Setting the bandwidth value to 4295M results in a value overflow where
> 'systat queues' shows it wrapped and starts from 0 again. And traffic is
> indeed restricted to such values, so does not appear to be just a cosmetic
> 'systat queues' issue.
>
> I am sure this must be a bug/regression,

I'd say a not-implemented feature (and I have a feeling it is not
going to be all that simple a thing to implement - though changing
scales so the uint32 carries bytes instead of bits per second might
not be _too_ terrible).

>  10Gbps on OpenBSD is trivial and
> common nowadays..

While using interfaces with 10Gbps link speed on OpenBSD is trivial,
actually pushing that much traffic (particularly with more complex
processing e.g. things like bandwidth controls, and particularly with
smaller packet sizes) not so much.


-- 
Please keep replies on the mailing list.



PF queue bandwidth limited to 32bit value

2023-09-11 Thread Andrew Lemin
Hi all,
Hope this finds you well.

I have discovered that PF's queueing is still limited to 32bit bandwidth
values.

I don't know if this is a regression or not. I am sure one of the
objectives of the ALTQ rewrite into the new queuing system we have in
OpenBSD today, was to allow bandwidth values larger than 4294M. Maybe I am
imagining it..

Anyway, I am trying to use OpenBSD PF to perform/filter Inter-VLAN routing
with 10Gbps trunks, and I cannot set the queue bandwidth higher than a
32bit value?

Setting the bandwidth value to 4295M results in a value overflow where
'systat queues' shows it wrapped and starts from 0 again. And traffic is
indeed restricted to such values, so does not appear to be just a cosmetic
'systat queues' issue.

I am sure this must be a bug/regression, 10Gbps on OpenBSD is trivial and
common nowadays..

Tested on OpenBSD 7.3
Thanks for checking my sanity :)
Andy.


Re: pf queue on packets with state

2021-02-02 Thread michal . lyszczek
Hi Stuart, thank you for your clear reply
On 2021-02-02 22:41:49, Stuart Henderson wrote:
> Whichever rule creates state for the packets that you want to send
> to a queue should have the queue assignment. The queue name is attached
> to the PF state; when the packet is transmitted outbound it will use
> the queue of that name on that interface.

Yup, that was is. Instead of doing

  match out on $i_lan all set queue q_lte_in_http set prio 0

I did it "the opoosite" way

  match in on $i_lan all set queue q_lte_in_http set prio 0

Also in my real rules I've changed "from port $p_http" to "to port $p_http",
and it started to match queues as expected. Thank you!

I did read something around these lines on the openbsd forum, that queues
are tied to input state, but I was just trying to do "pass in $i_lan".
It never occured to me to try do 'set queue' during 'in' part. I've read
about queueing in pf.conf(5) and nothing there hints this also.

> You don't want queue names dealing with in/out/interface. Just the type
> of traffic / queue policy / whatever. For example "user1", "user2", ..
> or "http", "dns", .. or "high/med/low" or something.
> 
Yes, I am indeed queueing by service dns/ssh/games, but my firewall has
multiple WAN interfaces with different speed so I also must specify this.
In examples I wanted to keep things to bare minimum so people do not have
to waste time thinking what mess I have in my pf.conf :D

> I find it easier to make the match rule setting the queue quite wide,
> then do anything more complex (IP/port restrictions etc) in pass/block
> rules.

> You should use some variant of "block" covering all traffic as your
> first rule ("block" / "block log" etc) so that packets are not allowed
> to pass unless they create state. This makes it easier to figure out
> the queues, and prevents state tracking getting messed up with TCP (the
> TCP state must be created from a SYN packet not an intermediate packet
> otherwise it doesn't know what the window-scaling value is, which will
> cause longer lasting or fast connections to get dropped incorrectly).

That's what I think too, I use pf in "block by default" and have rules
to block everything at top. And I intend to queue packets by service port
or IP.

> > Is there any way to limit ingress on some ips/ports? I'd like to limit
> > greedy apps like youtube or netflix from taking all the bandwidth.
> 
> Good luck finding the relevant IPs for these ;) You might like to play
> with "burst" and see if you can do something that way. (e.g. standard
> bandwidth is slower, but allow a fast initial burst). But you'll probably
> need to do that with separate queues per IP and it gets to be a pain.

I found some sites with ip ranges for netflix and youtube, they are quite
broad, but it's better than crippled network.


Thank you again for clarification and explaining this to me.

-- 
.-.---.-.--.
| Michal Lyszczek | Embedded C, Linux |   Company Address   |  .-. open source |
| +48 727 564 419 | Software Engineer | Leszczynskiego 4/29 |  oo|  supporter  |
| https://bofc.pl `.--: 50-078 Wroclaw, Pol | /`'\  &  |
| GPG FF1EBFE7E3A974B1 | Bits of Code | NIP:  813 349 58 78 |(\_;/) programer  |
`--^--^-^--'


signature.asc
Description: PGP signature


Re: pf queue on packets with state

2021-02-02 Thread Stuart Henderson
On 2021-02-02, michal.lyszc...@bofc.pl  wrote:
> --syjteu3hgkkj7xpe
> Content-Type: text/plain; charset=us-ascii
> Content-Disposition: inline
> Content-Transfer-Encoding: 7bit
>
> Hi, I'm trying to setup queues on my LTE interface. This machine is firewall
> machine with two interfaces: wan and lan. Egress traffic is queueing without
> a problem. Rules like
>
>   match out on $i_wan proto {tcp udp} to any port $p_dns set queue 
> q_lte_out_dns set prio 6
>
> work as intended and I can see that rules are being matched in systat queue
> and rules.
>
> Problem is with ingress packets. Yes, I know people say it makes no sense to
> do it, but I belive it can work for TCP traffic. The slower program is
> receiving data, the slower it will ACK, the slower server will be sending
> data, and there should be more space for other packets.
>
> Anyway, it does not seem to work for me. I try the most basic rules:
>
>   queue q_lte_in_root on $i_lan bandwidth 20M max 20M qlimit 50
>
> This works as intended, speedtests do indeed show my speed is more or less
> 20Mbit. Now I add 2 more queues, default and for http
>
>   queue q_lte_in_std  parent q_lte_in_root bandwidth 512K default qlimit 50
>   queue q_lte_in_http parent q_lte_in_root bandwidth 512K qlimit 50
>
> And I create match rule:
>
>   match out on $i_lan all set queue q_lte_in_http set prio 0

There is a knack to this that pretty much everybody doing queue config
takes a while to figure out - the queue name with "in" is a giveaway
that you didn't find it yet.

Whichever rule creates state for the packets that you want to send
to a queue should have the queue assignment. The queue name is attached
to the PF state; when the packet is transmitted outbound it will use
the queue of that name on that interface.

You don't want queue names dealing with in/out/interface. Just the type
of traffic / queue policy / whatever. For example "user1", "user2", ..
or "http", "dns", .. or "high/med/low" or something.

Simplest match rules are like

match from 192.0.2.1 queue user1
match to 192.0.2.1 queue user1

or

match proto tcp to port 80 queue http

(plus the pass rule to allow it)

I find it easier to make the match rule setting the queue quite wide,
then do anything more complex (IP/port restrictions etc) in pass/block
rules.

> And this rule is matched only by a handful of packets. systat queue
> shows that majority of packets go through q_lte_in_std, and only some
> of the packets go through q_lte_in_http. systat rules also shows only
> some of the packets are being matched by that rule.
>
>
> I don't know, it looks like only packets without state match "match"
> rule and are being queued properly? I know filtering will be skipped

You should use some variant of "block" covering all traffic as your
first rule ("block" / "block log" etc) so that packets are not allowed
to pass unless they create state. This makes it easier to figure out
the queues, and prevents state tracking getting messed up with TCP (the
TCP state must be created from a SYN packet not an intermediate packet
otherwise it doesn't know what the window-scaling value is, which will
cause longer lasting or fast connections to get dropped incorrectly).

> for packets that have state but queueing is not skipped. So why can't
> I queue packets ingressing on LTE that are being egressed on LAN
> interface?
>
> Is there any way to limit ingress on some ips/ports? I'd like to limit
> greedy apps like youtube or netflix from taking all the bandwidth.

Good luck finding the relevant IPs for these ;) You might like to play
with "burst" and see if you can do something that way. (e.g. standard
bandwidth is slower, but allow a fast initial burst). But you'll probably
need to do that with separate queues per IP and it gets to be a pain.

> I read pf.conf man and searched the whole net but I couldn't find
> answer to my question. I think I could make it work if I made pf
> stateless by default? Performance is not an issue here, machine
> can take it, but I couldn't find a way to do stateless by default.

stateful is the way to go, it just needs the queue assigned in the right
place.

> Any ideas? Maybe I didn't read something carefully enough?
>
>



pf queue on packets with state

2021-02-02 Thread michal . lyszczek
Hi, I'm trying to setup queues on my LTE interface. This machine is firewall
machine with two interfaces: wan and lan. Egress traffic is queueing without
a problem. Rules like

  match out on $i_wan proto {tcp udp} to any port $p_dns set queue 
q_lte_out_dns set prio 6

work as intended and I can see that rules are being matched in systat queue
and rules.

Problem is with ingress packets. Yes, I know people say it makes no sense to
do it, but I belive it can work for TCP traffic. The slower program is
receiving data, the slower it will ACK, the slower server will be sending
data, and there should be more space for other packets.

Anyway, it does not seem to work for me. I try the most basic rules:

  queue q_lte_in_root on $i_lan bandwidth 20M max 20M qlimit 50

This works as intended, speedtests do indeed show my speed is more or less
20Mbit. Now I add 2 more queues, default and for http

  queue q_lte_in_std  parent q_lte_in_root bandwidth 512K default qlimit 50
  queue q_lte_in_http parent q_lte_in_root bandwidth 512K qlimit 50

And I create match rule:

  match out on $i_lan all set queue q_lte_in_http set prio 0

And this rule is matched only by a handful of packets. systat queue
shows that majority of packets go through q_lte_in_std, and only some
of the packets go through q_lte_in_http. systat rules also shows only
some of the packets are being matched by that rule.


I don't know, it looks like only packets without state match "match"
rule and are being queued properly? I know filtering will be skipped
for packets that have state but queueing is not skipped. So why can't
I queue packets ingressing on LTE that are being egressed on LAN
interface?

Is there any way to limit ingress on some ips/ports? I'd like to limit
greedy apps like youtube or netflix from taking all the bandwidth.

I read pf.conf man and searched the whole net but I couldn't find
answer to my question. I think I could make it work if I made pf
stateless by default? Performance is not an issue here, machine
can take it, but I couldn't find a way to do stateless by default.

Any ideas? Maybe I didn't read something carefully enough?


-- 
.-.---.-.--.
| Michal Lyszczek | Embedded C, Linux |   Company Address   |  .-. open source |
| +48 727 564 419 | Software Engineer | Leszczynskiego 4/29 |  oo|  supporter  |
| https://bofc.pl `.--: 50-078 Wroclaw, Pol | /`'\  &  |
| GPG FF1EBFE7E3A974B1 | Bits of Code | NIP:  813 349 58 78 |(\_;/) programer  |
`--^--^-^--'


signature.asc
Description: PGP signature


Re: pf queue definition: bandwidth resolution problem

2017-05-13 Thread Carl Mascott
One more thing:
The BW column of "systat queues" has the same truncation error.
I'm guessing that "systat queues" is running "pfctl -vsqueue" periodically, but 
if that's not the case then the same fix is needed in systat.




On Sat, 5/13/17, Carl Mascott  wrote:

 Subject: Re: pf queue definition: bandwidth resolution problem
 To: "Mike Belopuhov" 
 Cc: misc@openbsd.org
 Date: Saturday, May 13, 2017, 4:55 PM
 
 I forgot to ask: How will I know when there's
 a snapshot with a fixed pfctl binary?
 Any problem with dropping the new pfctl
 binary into my 6.1-stable (i386) system?
 
 P.S. I'm new to OpenBSD.
 
 
 ----
 On Sat, 5/13/17, Carl Mascott 
 wrote:
 
  Subject: Re: pf queue definition:
 bandwidth resolution problem
  To: "Mike Belopuhov" 
  Cc: misc@openbsd.org,
 t...@openbsd.org
  Date: Saturday, May 13, 2017, 4:21 PM
  
  First, just to be safe, I did a
 bandwidth
  test with only one queue, max
 bandwidth 1999K.
  pf is fine: measured speed was about
  2M.
  
  Just eyeballing it, I don't see
  anything wrong with your patch, but I
 have no way to test
  it: I'm not set up to build from
 source.
  If I understand correctly you have
  already tested it.
  In that case, I guess it's OK to
 commit
  it (or however the process works..).
  
  Gee, this was easy. Thanks!
  
  
  
 
 --------
  On Sat, 5/13/17, Mike Belopuhov 
  wrote:
  
   Subject: Re: pf queue
 definition:
  bandwidth resolution problem
   To: "Carl Mascott" 
   Cc: misc@openbsd.org,
  t...@openbsd.org
   Date: Saturday, May 13, 2017,
 3:23 PM
   
   Ah, I see what you mean. 
 Indeed, we
  have to
   make sure the remainder
   is 0 when we're
   displaying the bandwidth.  I
 think
  the diff below is
   what we want.  Works fine here,
 any
  OKs?
   
   On Sat, May 13, 2017 at 18:34
 +,
  Carl
   Mascott wrote:
   > You missed the point. I
   didn't do any testing. I just
 looked
  at the output of
   "pfctl -squeue" (correction: In
 the
  original post
   I wrote "pfctl -srules") and
 noticed
  that the
   assigned queue bandwidths
 reported by
  pfctl were in some
   cases much different than the
  specified queue bandwidth
   parameters in pf.conf. To put
 it
  another way, the readback
   of defined bandwidth does not
 match
  the definition.
   > 
   > From observation it
   looks like specified bandwidths
 in
  pf.conf with 4 or more
   digits followed by 'K' have the
 3
  LSD's dropped
   and 'K' changed to 'M' in the
 output
  of
   "pfctl -squeue." Example: 1800K
 in
  pf.conf becomes
   1M in the output of "pfctl
 -squeue" --
  a very
   significant difference.
   > 
   > It remains to be
 determined
  whether the
   fault is in the setting of
 bandwidth
  by pf or in the
   reporting of bandwidth by
 pfctl.
  Actual tests with a simple
   queue, as you suggested, could
  determine whether pf is
   enforcing the correct bandwidth
  values. To make it easiest
   to see an error the optimum leaf
 queue
  max bandwidth to use
   is 1999K. Then see whether the
  measured bandwidth is ~2M or
   ~1M.
   > 
   > When I have
   time I'll do a simple test.
   > 
   > 
   > 
   >
  
 
 
   > On Sat, 5/13/17, Mike
 Belopuhov
  
   wrote:
   > 
   >  Subject:
   Re: pf queue definition:
 bandwidth
  resolution problem
   >  To: "Carl Mascott" 
   >  Cc: misc@openbsd.org
   >  Date: Saturday, May 13,
 2017,
  12:02
   PM
   >  
   >  On Tue,
   May 09, 2017 at 19:47 +,
 Carl
   > 
   Mascott wrote:
   >  > Intel Atom
   D2500
   >  1.66GHz
   > 
   > OpenBSD i386 v6.1-stable
   >  >
   
   >  > I can't get pf
   >  to give me the queue
 bandwidths
  that I
   specify in
   >  pf.conf.
   >  > 
   >  >
   >  pf.conf:
   >  > 
   >  > queue
   >  rootq
   on $ext_if bandwidth 9M max 9M
 qlimit
  100
   >  >         queue
 qdef
  parent
   rootq
   >  bandwidth 3650K default
   >  >        
   > 
   queue qrtp parent rootq
 bandwidth 350K
  min 350K burst
   700K
   >  for 200ms
   > 
   >         queue qweb
 parent
   > 
   rootq bandwidth 4M
   >  >        
   queue
   >  qpri parent rootq
 bandwidth
   900K min 50K burst 1800K for
   >  200ms
   >  >         queue
 qdns
  parent
   >  rootq bandwidth 100K min
 10K
  burst 200K
   for 1000ms
   >  > 
   >  > output of pfctl
   >  -srules:
   >  > 
   >  > queue
   >  rootq
   on bge0 bandwidth 9M, max 9M
 qlimit
  100
   >  > queue qdef parent
 rootq
  bandwidth
   3M
   >  default qlimit 50
   >  > queue

Re: pf queue definition: bandwidth resolution problem

2017-05-13 Thread Carl Mascott
I forgot to ask: How will I know when there's a snapshot with a fixed pfctl 
binary?
Any problem with dropping the new pfctl binary into my 6.1-stable (i386) system?

P.S. I'm new to OpenBSD.



On Sat, 5/13/17, Carl Mascott  wrote:

 Subject: Re: pf queue definition: bandwidth resolution problem
 To: "Mike Belopuhov" 
 Cc: misc@openbsd.org, t...@openbsd.org
 Date: Saturday, May 13, 2017, 4:21 PM
 
 First, just to be safe, I did a bandwidth
 test with only one queue, max bandwidth 1999K.
 pf is fine: measured speed was about
 2M.
 
 Just eyeballing it, I don't see
 anything wrong with your patch, but I have no way to test
 it: I'm not set up to build from source.
 If I understand correctly you have
 already tested it.
 In that case, I guess it's OK to commit
 it (or however the process works..).
 
 Gee, this was easy. Thanks!
 
 
 
 
 On Sat, 5/13/17, Mike Belopuhov 
 wrote:
 
  Subject: Re: pf queue definition:
 bandwidth resolution problem
  To: "Carl Mascott" 
  Cc: misc@openbsd.org,
 t...@openbsd.org
  Date: Saturday, May 13, 2017, 3:23 PM
  
  Ah, I see what you mean.  Indeed, we
 have to
  make sure the remainder
  is 0 when we're
  displaying the bandwidth.  I think
 the diff below is
  what we want.  Works fine here, any
 OKs?
  
  On Sat, May 13, 2017 at 18:34 +,
 Carl
  Mascott wrote:
  > You missed the point. I
  didn't do any testing. I just looked
 at the output of
  "pfctl -squeue" (correction: In the
 original post
  I wrote "pfctl -srules") and noticed
 that the
  assigned queue bandwidths reported by
 pfctl were in some
  cases much different than the
 specified queue bandwidth
  parameters in pf.conf. To put it
 another way, the readback
  of defined bandwidth does not match
 the definition.
  > 
  > From observation it
  looks like specified bandwidths in
 pf.conf with 4 or more
  digits followed by 'K' have the 3
 LSD's dropped
  and 'K' changed to 'M' in the output
 of
  "pfctl -squeue." Example: 1800K in
 pf.conf becomes
  1M in the output of "pfctl -squeue" --
 a very
  significant difference.
  > 
  > It remains to be determined
 whether the
  fault is in the setting of bandwidth
 by pf or in the
  reporting of bandwidth by pfctl.
 Actual tests with a simple
  queue, as you suggested, could
 determine whether pf is
  enforcing the correct bandwidth
 values. To make it easiest
  to see an error the optimum leaf queue
 max bandwidth to use
  is 1999K. Then see whether the
 measured bandwidth is ~2M or
  ~1M.
  > 
  > When I have
  time I'll do a simple test.
  > 
  > 
  > 
  >
 
 
  > On Sat, 5/13/17, Mike Belopuhov
 
  wrote:
  > 
  >  Subject:
  Re: pf queue definition: bandwidth
 resolution problem
  >  To: "Carl Mascott" 
  >  Cc: misc@openbsd.org
  >  Date: Saturday, May 13, 2017,
 12:02
  PM
  >  
  >  On Tue,
  May 09, 2017 at 19:47 +, Carl
  > 
  Mascott wrote:
  >  > Intel Atom
  D2500
  >  1.66GHz
  > 
  > OpenBSD i386 v6.1-stable
  >  >
  
  >  > I can't get pf
  >  to give me the queue bandwidths
 that I
  specify in
  >  pf.conf.
  >  > 
  >  >
  >  pf.conf:
  >  > 
  >  > queue
  >  rootq
  on $ext_if bandwidth 9M max 9M qlimit
 100
  >  >         queue qdef
 parent
  rootq
  >  bandwidth 3650K default
  >  >        
  > 
  queue qrtp parent rootq bandwidth 350K
 min 350K burst
  700K
  >  for 200ms
  > 
  >         queue qweb parent
  > 
  rootq bandwidth 4M
  >  >        
  queue
  >  qpri parent rootq bandwidth
  900K min 50K burst 1800K for
  >  200ms
  >  >         queue qdns
 parent
  >  rootq bandwidth 100K min 10K
 burst 200K
  for 1000ms
  >  > 
  >  > output of pfctl
  >  -srules:
  >  > 
  >  > queue
  >  rootq
  on bge0 bandwidth 9M, max 9M qlimit
 100
  >  > queue qdef parent rootq
 bandwidth
  3M
  >  default qlimit 50
  >  > queue qrtp parent
  >  rootq bandwidth 350K, min 350K
 burst
  700K for 200ms qlimit
  >  50
  >  > queue qweb parent rootq
 bandwidth
  4M
  >  qlimit 50
  > 
  > queue qpri parent rootq
  > 
  bandwidth 900K, min 50K burst 1M for
 200ms qlimit 50
  >  > queue qdns parent rootq
 bandwidth
  100K,
  >  min 10K burst 200K for 1000ms
  qlimit 50
  >  >
  > 
  
  >  > Discrepancies in the
 above:
  >  > 
  >  >   
           
  >     defined     
     actual
  >  >     
  >             -- 
       
  -
  >  > qdef BW   3650K   
        3M
  >  > qpri burst  1800K 
          1M
  >  > 
  >  > It looks like for
  >  anything

Re: pf queue definition: bandwidth resolution problem

2017-05-13 Thread Carl Mascott
First, just to be safe, I did a bandwidth test with only one queue, max 
bandwidth 1999K.
pf is fine: measured speed was about 2M.

Just eyeballing it, I don't see anything wrong with your patch, but I have no 
way to test it: I'm not set up to build from source.
If I understand correctly you have already tested it.
In that case, I guess it's OK to commit it (or however the process works..).

Gee, this was easy. Thanks!




On Sat, 5/13/17, Mike Belopuhov  wrote:

 Subject: Re: pf queue definition: bandwidth resolution problem
 To: "Carl Mascott" 
 Cc: misc@openbsd.org, t...@openbsd.org
 Date: Saturday, May 13, 2017, 3:23 PM
 
 Ah, I see what you mean.  Indeed, we have to
 make sure the remainder
 is 0 when we're
 displaying the bandwidth.  I think the diff below is
 what we want.  Works fine here, any OKs?
 
 On Sat, May 13, 2017 at 18:34 +, Carl
 Mascott wrote:
 > You missed the point. I
 didn't do any testing. I just looked at the output of
 "pfctl -squeue" (correction: In the original post
 I wrote "pfctl -srules") and noticed that the
 assigned queue bandwidths reported by pfctl were in some
 cases much different than the specified queue bandwidth
 parameters in pf.conf. To put it another way, the readback
 of defined bandwidth does not match the definition.
 > 
 > From observation it
 looks like specified bandwidths in pf.conf with 4 or more
 digits followed by 'K' have the 3 LSD's dropped
 and 'K' changed to 'M' in the output of
 "pfctl -squeue." Example: 1800K in pf.conf becomes
 1M in the output of "pfctl -squeue" -- a very
 significant difference.
 > 
 > It remains to be determined whether the
 fault is in the setting of bandwidth by pf or in the
 reporting of bandwidth by pfctl. Actual tests with a simple
 queue, as you suggested, could determine whether pf is
 enforcing the correct bandwidth values. To make it easiest
 to see an error the optimum leaf queue max bandwidth to use
 is 1999K. Then see whether the measured bandwidth is ~2M or
 ~1M.
 > 
 > When I have
 time I'll do a simple test.
 > 
 > 
 > 
 >
 
 > On Sat, 5/13/17, Mike Belopuhov 
 wrote:
 > 
 >  Subject:
 Re: pf queue definition: bandwidth resolution problem
 >  To: "Carl Mascott" 
 >  Cc: misc@openbsd.org
 >  Date: Saturday, May 13, 2017, 12:02
 PM
 >  
 >  On Tue,
 May 09, 2017 at 19:47 +, Carl
 > 
 Mascott wrote:
 >  > Intel Atom
 D2500
 >  1.66GHz
 > 
 > OpenBSD i386 v6.1-stable
 >  >
 
 >  > I can't get pf
 >  to give me the queue bandwidths that I
 specify in
 >  pf.conf.
 >  > 
 >  >
 >  pf.conf:
 >  > 
 >  > queue
 >  rootq
 on $ext_if bandwidth 9M max 9M qlimit 100
 >  >         queue qdef parent
 rootq
 >  bandwidth 3650K default
 >  >        
 > 
 queue qrtp parent rootq bandwidth 350K min 350K burst
 700K
 >  for 200ms
 > 
 >         queue qweb parent
 > 
 rootq bandwidth 4M
 >  >        
 queue
 >  qpri parent rootq bandwidth
 900K min 50K burst 1800K for
 >  200ms
 >  >         queue qdns parent
 >  rootq bandwidth 100K min 10K burst 200K
 for 1000ms
 >  > 
 >  > output of pfctl
 >  -srules:
 >  > 
 >  > queue
 >  rootq
 on bge0 bandwidth 9M, max 9M qlimit 100
 >  > queue qdef parent rootq bandwidth
 3M
 >  default qlimit 50
 >  > queue qrtp parent
 >  rootq bandwidth 350K, min 350K burst
 700K for 200ms qlimit
 >  50
 >  > queue qweb parent rootq bandwidth
 4M
 >  qlimit 50
 > 
 > queue qpri parent rootq
 > 
 bandwidth 900K, min 50K burst 1M for 200ms qlimit 50
 >  > queue qdns parent rootq bandwidth
 100K,
 >  min 10K burst 200K for 1000ms
 qlimit 50
 >  >
 > 
 
 >  > Discrepancies in the above:
 >  > 
 >  >   
          
 >     defined     
    actual
 >  >     
 >             --        
 -
 >  > qdef BW   3650K   
       3M
 >  > qpri burst  1800K 
         1M
 >  > 
 >  > It looks like for
 >  anything specified as abcdK the result
 is aM, i.e., for any
 >  bandwidth >=
 1000K the resulting bandwidth is
 > 
 truncated (not rounded) to M, where
 >   = most significant digit.
 Any bandwidth
 >  < 1000K works
 correctly.
 >  > 
 >  > Is this a bug, a misfeature, or
 a
 >  feature?
 > 
 > Thanks!
 >  >
 >  
 
 Index:
 sbin/pfctl/pfctl_parser.c
 ===
 RCS file:
 /home/cvs/src/sbin/pfctl/pfctl_parser.c,v
 retrieving revision 1.309
 diff
 -u -p -r1.309 pfctl_parser.c
 ---
 sbin/pfctl/pfctl_parser.c    26 Oct 2016 14:15:59
 -    1.309
 +++
 sbin/pfctl/pfctl_parser.c    13 May 2017 19:18:19
 -
 @@ -1177,7 +1177,7 @@
 print_bwspec(const char *prefix, struct 
 
         printf("%s%u%%", prefix,
 bw->percent);
      else if
 (bw->absolute) {
          rate =
 bw->absolute;
 -        for (i = 0;
 rate >= 1000 && i <= 3; i++)
 +        for (i = 0; rate >= 1000
 && i <= 3 && (rate % 1000 == 0); i++)
              rate /= 1000;
          printf("%s%u%c",
 prefix, rate, unit[i]);
      }
 
 



Re: pf queue definition: bandwidth resolution problem

2017-05-13 Thread Mike Belopuhov
Ah, I see what you mean.  Indeed, we have to make sure the remainder
is 0 when we're displaying the bandwidth.  I think the diff below is
what we want.  Works fine here, any OKs?

On Sat, May 13, 2017 at 18:34 +, Carl Mascott wrote:
> You missed the point. I didn't do any testing. I just looked at the output of 
> "pfctl -squeue" (correction: In the original post I wrote "pfctl -srules") 
> and noticed that the assigned queue bandwidths reported by pfctl were in some 
> cases much different than the specified queue bandwidth parameters in 
> pf.conf. To put it another way, the readback of defined bandwidth does not 
> match the definition.
> 
> From observation it looks like specified bandwidths in pf.conf with 4 or more 
> digits followed by 'K' have the 3 LSD's dropped and 'K' changed to 'M' in the 
> output of "pfctl -squeue." Example: 1800K in pf.conf becomes 1M in the output 
> of "pfctl -squeue" -- a very significant difference.
> 
> It remains to be determined whether the fault is in the setting of bandwidth 
> by pf or in the reporting of bandwidth by pfctl. Actual tests with a simple 
> queue, as you suggested, could determine whether pf is enforcing the correct 
> bandwidth values. To make it easiest to see an error the optimum leaf queue 
> max bandwidth to use is 1999K. Then see whether the measured bandwidth is ~2M 
> or ~1M.
> 
> When I have time I'll do a simple test.
> 
> 
> 
> 
> On Sat, 5/13/17, Mike Belopuhov  wrote:
> 
>  Subject: Re: pf queue definition: bandwidth resolution problem
>  To: "Carl Mascott" 
>  Cc: misc@openbsd.org
>  Date: Saturday, May 13, 2017, 12:02 PM
>  
>  On Tue, May 09, 2017 at 19:47 +, Carl
>  Mascott wrote:
>  > Intel Atom D2500
>  1.66GHz
>  > OpenBSD i386 v6.1-stable
>  > 
>  > I can't get pf
>  to give me the queue bandwidths that I specify in
>  pf.conf.
>  > 
>  >
>  pf.conf:
>  > 
>  > queue
>  rootq on $ext_if bandwidth 9M max 9M qlimit 100
>  >         queue qdef parent rootq
>  bandwidth 3650K default
>  >        
>  queue qrtp parent rootq bandwidth 350K min 350K burst 700K
>  for 200ms
>  >         queue qweb parent
>  rootq bandwidth 4M
>  >         queue
>  qpri parent rootq bandwidth 900K min 50K burst 1800K for
>  200ms
>  >         queue qdns parent
>  rootq bandwidth 100K min 10K burst 200K for 1000ms
>  > 
>  > output of pfctl
>  -srules:
>  > 
>  > queue
>  rootq on bge0 bandwidth 9M, max 9M qlimit 100
>  > queue qdef parent rootq bandwidth 3M
>  default qlimit 50
>  > queue qrtp parent
>  rootq bandwidth 350K, min 350K burst 700K for 200ms qlimit
>  50
>  > queue qweb parent rootq bandwidth 4M
>  qlimit 50
>  > queue qpri parent rootq
>  bandwidth 900K, min 50K burst 1M for 200ms qlimit 50
>  > queue qdns parent rootq bandwidth 100K,
>  min 10K burst 200K for 1000ms qlimit 50
>  >
>  
>  > Discrepancies in the above:
>  > 
>  >             
>     defined         actual
>  >     
>             --         -
>  > qdef BW   3650K          3M
>  > qpri burst  1800K          1M
>  > 
>  > It looks like for
>  anything specified as abcdK the result is aM, i.e., for any
>  bandwidth >= 1000K the resulting bandwidth is
>  truncated (not rounded) to M, where
>   = most significant digit. Any bandwidth
>  < 1000K works correctly.
>  > 
>  > Is this a bug, a misfeature, or a
>  feature?
>  > Thanks!
>  >
>  

Index: sbin/pfctl/pfctl_parser.c
===
RCS file: /home/cvs/src/sbin/pfctl/pfctl_parser.c,v
retrieving revision 1.309
diff -u -p -r1.309 pfctl_parser.c
--- sbin/pfctl/pfctl_parser.c   26 Oct 2016 14:15:59 -  1.309
+++ sbin/pfctl/pfctl_parser.c   13 May 2017 19:18:19 -
@@ -1177,7 +1177,7 @@ print_bwspec(const char *prefix, struct 
printf("%s%u%%", prefix, bw->percent);
else if (bw->absolute) {
rate = bw->absolute;
-   for (i = 0; rate >= 1000 && i <= 3; i++)
+   for (i = 0; rate >= 1000 && i <= 3 && (rate % 1000 == 0); i++)
rate /= 1000;
printf("%s%u%c", prefix, rate, unit[i]);
}



Re: pf queue definition: bandwidth resolution problem

2017-05-13 Thread Carl Mascott
You missed the point. I didn't do any testing. I just looked at the output of 
"pfctl -squeue" (correction: In the original post I wrote "pfctl -srules") and 
noticed that the assigned queue bandwidths reported by pfctl were in some cases 
much different than the specified queue bandwidth parameters in pf.conf. To put 
it another way, the readback of defined bandwidth does not match the definition.

>From observation it looks like specified bandwidths in pf.conf with 4 or more 
>digits followed by 'K' have the 3 LSD's dropped and 'K' changed to 'M' in the 
>output of "pfctl -squeue." Example: 1800K in pf.conf becomes 1M in the output 
>of "pfctl -squeue" -- a very significant difference.

It remains to be determined whether the fault is in the setting of bandwidth by 
pf or in the reporting of bandwidth by pfctl. Actual tests with a simple queue, 
as you suggested, could determine whether pf is enforcing the correct bandwidth 
values. To make it easiest to see an error the optimum leaf queue max bandwidth 
to use is 1999K. Then see whether the measured bandwidth is ~2M or ~1M.

When I have time I'll do a simple test.



--------
On Sat, 5/13/17, Mike Belopuhov  wrote:

 Subject: Re: pf queue definition: bandwidth resolution problem
 To: "Carl Mascott" 
 Cc: misc@openbsd.org
 Date: Saturday, May 13, 2017, 12:02 PM
 
 On Tue, May 09, 2017 at 19:47 +, Carl
 Mascott wrote:
 > Intel Atom D2500
 1.66GHz
 > OpenBSD i386 v6.1-stable
 > 
 > I can't get pf
 to give me the queue bandwidths that I specify in
 pf.conf.
 > 
 >
 pf.conf:
 > 
 > queue
 rootq on $ext_if bandwidth 9M max 9M qlimit 100
 >         queue qdef parent rootq
 bandwidth 3650K default
 >        
 queue qrtp parent rootq bandwidth 350K min 350K burst 700K
 for 200ms
 >         queue qweb parent
 rootq bandwidth 4M
 >         queue
 qpri parent rootq bandwidth 900K min 50K burst 1800K for
 200ms
 >         queue qdns parent
 rootq bandwidth 100K min 10K burst 200K for 1000ms
 > 
 > output of pfctl
 -srules:
 > 
 > queue
 rootq on bge0 bandwidth 9M, max 9M qlimit 100
 > queue qdef parent rootq bandwidth 3M
 default qlimit 50
 > queue qrtp parent
 rootq bandwidth 350K, min 350K burst 700K for 200ms qlimit
 50
 > queue qweb parent rootq bandwidth 4M
 qlimit 50
 > queue qpri parent rootq
 bandwidth 900K, min 50K burst 1M for 200ms qlimit 50
 > queue qdns parent rootq bandwidth 100K,
 min 10K burst 200K for 1000ms qlimit 50
 >
 
 > Discrepancies in the above:
 > 
 >             
    defined         actual
 >     
            --         -
 > qdef BW   3650K          3M
 > qpri burst  1800K          1M
 > 
 > It looks like for
 anything specified as abcdK the result is aM, i.e., for any
 bandwidth >= 1000K the resulting bandwidth is
 truncated (not rounded) to M, where
  = most significant digit. Any bandwidth
 < 1000K works correctly.
 > 
 > Is this a bug, a misfeature, or a
 feature?
 > Thanks!
 >
 
 
 Borrowing is enabled by
 default, so it's hard to say what affects
 your test since you didn't specify how are
 you exactly testing this.
 Every queue in
 your setup can borrow up to 9M of bandwidth, preventing
 others from doing the same, especially since
 you're attempting to
 measure a 200ms
 burst.  How are you exactly doing this?
 
 I suggest you create a simple isolated test
 where you have only the
 specified traffic,
 e.g. generated with tcpbench.  Then you disable
 borrowing by specifying "max"
 bandwidth:
 
 queue rootq on $ext_if
 bandwidth 9M max 9M
     queue qdef
 parent rootq bandwidth 3650K max 3650K default
 
 W/o other queues.  Then you
 proceed to add others, with or without
 borrowing enabled undestanding what you're
 doing and what an how
 you're testing.
 
 Inside pfctl parser all
 bandwitdh specifications are converted to
 bits per second (where K means multiplied by
 1000 and M by 1.000.000).
 
 



Re: pf queue definition: bandwidth resolution problem

2017-05-13 Thread Mike Belopuhov
On Tue, May 09, 2017 at 19:47 +, Carl Mascott wrote:
> Intel Atom D2500 1.66GHz
> OpenBSD i386 v6.1-stable
> 
> I can't get pf to give me the queue bandwidths that I specify in pf.conf.
> 
> pf.conf:
> 
> queue rootq on $ext_if bandwidth 9M max 9M qlimit 100
> queue qdef parent rootq bandwidth 3650K default
> queue qrtp parent rootq bandwidth 350K min 350K burst 700K for 200ms
> queue qweb parent rootq bandwidth 4M
> queue qpri parent rootq bandwidth 900K min 50K burst 1800K for 200ms
> queue qdns parent rootq bandwidth 100K min 10K burst 200K for 1000ms
> 
> output of pfctl -srules:
> 
> queue rootq on bge0 bandwidth 9M, max 9M qlimit 100
> queue qdef parent rootq bandwidth 3M default qlimit 50
> queue qrtp parent rootq bandwidth 350K, min 350K burst 700K for 200ms qlimit 
> 50
> queue qweb parent rootq bandwidth 4M qlimit 50
> queue qpri parent rootq bandwidth 900K, min 50K burst 1M for 200ms qlimit 50
> queue qdns parent rootq bandwidth 100K, min 10K burst 200K for 1000ms qlimit 
> 50
> 
> Discrepancies in the above:
> 
> defined actual
> -- -
> qdef BW   3650K  3M
> qpri burst  1800K  1M
> 
> It looks like for anything specified as abcdK the result is aM, i.e., for any 
> bandwidth >= 1000K the resulting bandwidth is truncated (not rounded) to 
> M, where  = most significant digit. Any bandwidth < 
> 1000K works correctly.
> 
> Is this a bug, a misfeature, or a feature?
> Thanks!
> 

Borrowing is enabled by default, so it's hard to say what affects
your test since you didn't specify how are you exactly testing this.
Every queue in your setup can borrow up to 9M of bandwidth, preventing
others from doing the same, especially since you're attempting to
measure a 200ms burst.  How are you exactly doing this?

I suggest you create a simple isolated test where you have only the
specified traffic, e.g. generated with tcpbench.  Then you disable
borrowing by specifying "max" bandwidth:

queue rootq on $ext_if bandwidth 9M max 9M
queue qdef parent rootq bandwidth 3650K max 3650K default

W/o other queues.  Then you proceed to add others, with or without
borrowing enabled undestanding what you're doing and what an how
you're testing.

Inside pfctl parser all bandwitdh specifications are converted to
bits per second (where K means multiplied by 1000 and M by 1.000.000).



pf queue definition: bandwidth resolution problem

2017-05-09 Thread Carl Mascott
Intel Atom D2500 1.66GHz
OpenBSD i386 v6.1-stable

I can't get pf to give me the queue bandwidths that I specify in pf.conf.

pf.conf:

queue rootq on $ext_if bandwidth 9M max 9M qlimit 100
queue qdef parent rootq bandwidth 3650K default
queue qrtp parent rootq bandwidth 350K min 350K burst 700K for 200ms
queue qweb parent rootq bandwidth 4M
queue qpri parent rootq bandwidth 900K min 50K burst 1800K for 200ms
queue qdns parent rootq bandwidth 100K min 10K burst 200K for 1000ms

output of pfctl -srules:

queue rootq on bge0 bandwidth 9M, max 9M qlimit 100
queue qdef parent rootq bandwidth 3M default qlimit 50
queue qrtp parent rootq bandwidth 350K, min 350K burst 700K for 200ms qlimit 50
queue qweb parent rootq bandwidth 4M qlimit 50
queue qpri parent rootq bandwidth 900K, min 50K burst 1M for 200ms qlimit 50
queue qdns parent rootq bandwidth 100K, min 10K burst 200K for 1000ms qlimit 50

Discrepancies in the above:

defined actual
-- -
qdef BW   3650K  3M
qpri burst  1800K  1M

It looks like for anything specified as abcdK the result is aM, i.e., for any 
bandwidth >= 1000K the resulting bandwidth is truncated (not rounded) to 
M, where  = most significant digit. Any bandwidth < 
1000K works correctly.

Is this a bug, a misfeature, or a feature?
Thanks!



Re: pf queue bandwidth estimation

2016-05-13 Thread Stuart Henderson
On 2016/05/13 11:31, niya levi wrote:
> hi Stuart
> 
> On 13/05/16 08:32, Stuart Henderson wrote:
> > On 2016-05-12, niya levi  wrote:
> >> using broadbandspeedchecker.co.uk i measured the bandwidth on my virgin
> >> media line,
> >> the download speed varied form as low as 20Mb/sec up to 50Mb/sec
> >> depending on the time of day the test was run,
> > Queuing is done on the transmit side, so the bandwidth you should be
> > interested is upload, not download.
> >
> > You have already received the download traffic. You *can* queue when
> > you pass it on to another host but that doesn't have a direct effect
> > on what people on the internet send to you so however you do things
> > "download queueing" won't work reliably. If I send 1Gb/s of packets
> > to you, it doesn't matter what you do, it's going to starve out
> > other traffic and nothing you can do on your side of the link is
> > going to help.
> thanks understood
> 
> >> what will be the result if i put a value for the queue bandwidth which
> >> is greater or lesser the the maximum download speed ?
> > If lesser: transfers will be limited to a slower speed than is actually
> > available. This gives more predictable performance; queues work ok;
> > but total bandwidth will be reduced.
> >
> > If greater: you lose control over queueing as it will then be done on
> > a device upstream from you (e.g. a modem or router on the next hop
> > or later).
> i assume that this also applies to the upload speed

Yes.

> > If the times/bandwidths are fairly predictable then you could always
> > use a cronjob to switch config. (Setup variables in pf.conf to reference
> > in the 'queue' rules then you can override them like 'pfctl -Dbandwidth=20M
> > -Dbulk=3M -f /etc/pf.conf' rather than having a mess of separate files).
> > That way you don't lose too much at times when the ISP is coping, and
> > still don't have too many problems when they're overloaded.
> >
> > But hopefully your upload bandwidth is a lot more consistent throughout
> > the day anyway.
> >
> do you know of any software i can use to measure upload speeds ?

There's speedtest-cli (in packages), but as with any tests like this,
you're testing bandwidth to the speed test server (so this includes
your line, but also connectivity between your ISP and the test server,
and sometimes speedtest servers max out their own network connection).



Re: pf queue bandwidth estimation

2016-05-13 Thread Stuart Henderson
On 2016-05-12, niya levi  wrote:
> using broadbandspeedchecker.co.uk i measured the bandwidth on my virgin
> media line,
> the download speed varied form as low as 20Mb/sec up to 50Mb/sec
> depending on the time of day the test was run,

Queuing is done on the transmit side, so the bandwidth you should be
interested is upload, not download.

You have already received the download traffic. You *can* queue when
you pass it on to another host but that doesn't have a direct effect
on what people on the internet send to you so however you do things
"download queueing" won't work reliably. If I send 1Gb/s of packets
to you, it doesn't matter what you do, it's going to starve out
other traffic and nothing you can do on your side of the link is
going to help.

> what will be the result if i put a value for the queue bandwidth which
> is greater or lesser the the maximum download speed ?

If lesser: transfers will be limited to a slower speed than is actually
available. This gives more predictable performance; queues work ok;
but total bandwidth will be reduced.

If greater: you lose control over queueing as it will then be done on
a device upstream from you (e.g. a modem or router on the next hop
or later).

If the times/bandwidths are fairly predictable then you could always
use a cronjob to switch config. (Setup variables in pf.conf to reference
in the 'queue' rules then you can override them like 'pfctl -Dbandwidth=20M
-Dbulk=3M -f /etc/pf.conf' rather than having a mess of separate files).
That way you don't lose too much at times when the ISP is coping, and
still don't have too many problems when they're overloaded.

But hopefully your upload bandwidth is a lot more consistent throughout
the day anyway.



pf queue bandwidth estimation

2016-05-12 Thread niya levi
using broadbandspeedchecker.co.uk i measured the bandwidth on my virgin
media line,
the download speed varied form as low as 20Mb/sec up to 50Mb/sec
depending on the time of day the test was run,
what will be the result if i put a value for the queue bandwidth which
is greater or lesser the the maximum download speed ?
shadrock



pfqstat - records pf queue bytes in CSV format

2014-12-11 Thread Daniel Melameth
One of my favorite ports is pfstat.  I've used it religiously for
years with minor firewalls for bandwidth and queue graphs.  When ALTQ
was retired, pfstat could no longer graph my queues and this is still
the case today.  The correct behavior here would be for me to roll up
my college-level C sleeves and fix pfstat, but I did not take this
route and, instead, took the lesser route and wrote a script that
parses pfctl's output.  A big reason for me doing this was related to
Medium (I have no affiliation with them) releasing/open sourcing their
pretty and simple data visualization tool, Charted, and there are more
details about this at
https://medium.com/@sall/using-charted-2149df6bb0bd.

If you are interested, a snapshot of my queue graph is at
http://bink.mooo.com/~daniel/pub/pfqstat.jpg and you can currently
grab pfqstat from http://bink.mooo.com/~daniel/pub (read the initial
comments in the script before using).  Please know I am currently the
only person using this script, so it might be horribly broken for some
and I welcome most comments, criticism and patches.  Please do not use
misc@ for questions or issues with this script--email me directly.

Cheers.



Re: pf/queue questions

2014-09-24 Thread Dewey Hylton
> From: "Daniel Melameth" 
> Subject: Re: pf/queue questions
> 
> On Tue, Sep 23, 2014 at 9:39 AM, Dewey Hylton  wrote:
> > i have a site-to-site vpn setup across a 40Mbps wan link (average ~30ms
> > latency). one of its uses is for san replication, but of course management
> > traffic (ssh sessions, etc.) have to cross the link as well. without using
> > queues, at times the replication traffic is such that management traffic
> > suffers to the point of being unusable.
> >
> > so i setup queues, which "fixed" the management problem. but despite the
> > management bandwidth requirements being minimal, the san replication
> > traffic was then seen to plateau well below where i believe it should have
> > been.
> >
> > one specific thing i'm seeing with this particular configuration is that
> > san replication traffic tops out at 24Mbps, as seen on the wan circuit
> > itself (outside of openbsd). removing the queues results in 100% wan
> > utilization, even up to 100Mbps when the circuit is temporarily
> > reconfigured to allow it.
> 
> It's not clear to me in which direction or on what interface the SAN
> traffic is, but your 20Mb queue on $INETIF might be limiting your
> maximum throughput.  That said, you might also want to consider
> configuring qlimit and you can tweak this based on QLEN in systat
> queues.  Lastly, I recall henning@ saying queuing on VLANs is mostly
> useless, so you only want to apply altq to physical interfaces.

daniel, thanks for your input. after going back and reading henning's comments 
regarding queuing on vlans, i moved the queue definition to the physical 
interface and things are now working as expected.



Re: pf queue max bug

2014-09-24 Thread Atanas Vladimirov

Hi,
I think that I found something.
It occurs when I set max limit on i_bittor/b_bittor queues but I didn't 
set min limit.
I read pf.conf(5) many times and I didn't find that min and max must be 
used together.


In i386 I had this:

 queue rootq on $ExtIf bandwidth 100M max 100M
  queue inter parent rootq bandwidth 3M max 2950K
   queue i_ack parent inter bandwidth 2M, min 1M
   queue i_dns parent inter bandwidth 500K
   queue i_ntp parent inter bandwidth 300K
   queue i_web parent inter bandwidth 2M burst 2M for 1ms
   queue i_bulkparent inter bandwidth 170K
   queue i_bittor  parent inter bandwidth 30K, max 1400K default

  queue bg parent rootq bandwidth 40M max 39M
   queue b_ack parent bg bandwidth 15M, min 10M
   queue b_dns parent bg bandwidth 1M, min 1M
   queue b_ntp parent bg bandwidth 4M, min 4M
   queue b_rdc parent bg bandwidth 4M, min 4M
   queue b_web parent bg bandwidth 15M, min 15M burst 40M for 
5000ms, max 37M

   queue b_bulkparent bg bandwidth 8M, min 5M
   queue b_bittor  parent bg bandwidth 2M, max 29M

 and it worked as it should!

When I changed my server and I installed amd64 -current I just copied my 
working pf.conf.
After few days I noticed that max limit on some of my queues didn't work 
properly as expected.

Today I set min limit to i_bittor/b_bittor and it works again:

 queue rootq on $ExtIf bandwidth 100M max 100M
  queue inter parent rootq bandwidth 3M max 2950K
   queue i_ack parent inter bandwidth 2M, min 1M
   queue i_dns parent inter bandwidth 500K
   queue i_ntp parent inter bandwidth 300K
   queue i_web parent inter bandwidth 2M burst 2M for 1ms
   queue i_bulkparent inter bandwidth 170K
   queue i_bittor  parent inter bandwidth 30K, min 20K, max 1400K 
default


  queue bg parent rootq bandwidth 40M max 39M
   queue b_ack parent bg bandwidth 15M, min 10M
   queue b_dns parent bg bandwidth 1M, min 1M
   queue b_ntp parent bg bandwidth 4M, min 4M
   queue b_rdc parent bg bandwidth 4M, min 4M
   queue b_web parent bg bandwidth 15M, min 15M burst 40M for 
5000ms, max 37M

   queue b_bulkparent bg bandwidth 8M, min 5M
   queue b_bittor  parent bg bandwidth 2M, min 1M, max 29M

Is this a bug in pf or pf.conf(5) is not correct?
Thank for your time,
Atanas



Re: pf/queue questions

2014-09-23 Thread Daniel Melameth
On Tue, Sep 23, 2014 at 9:39 AM, Dewey Hylton  wrote:
> i have a site-to-site vpn setup across a 40Mbps wan link (average ~30ms
> latency). one of its uses is for san replication, but of course management
> traffic (ssh sessions, etc.) have to cross the link as well. without using
> queues, at times the replication traffic is such that management traffic
> suffers to the point of being unusable.
>
> so i setup queues, which "fixed" the management problem. but despite the
> management bandwidth requirements being minimal, the san replication
> traffic was then seen to plateau well below where i believe it should have
> been.
>
> one specific thing i'm seeing with this particular configuration is that
> san replication traffic tops out at 24Mbps, as seen on the wan circuit
> itself (outside of openbsd). removing the queues results in 100% wan
> utilization, even up to 100Mbps when the circuit is temporarily
> reconfigured to allow it.

It's not clear to me in which direction or on what interface the SAN
traffic is, but your 20Mb queue on $INETIF might be limiting your
maximum throughput.  That said, you might also want to consider
configuring qlimit and you can tweak this based on QLEN in systat
queues.  Lastly, I recall henning@ saying queuing on VLANs is mostly
useless, so you only want to apply altq to physical interfaces.

> pf.conf:
> ===
> ##
> # macros
>
> LANIF   = em0
> WANIF   = em1
> PFSYNC  = em2
> INETIF  = vlan2
> TWP2PIF = vlan3
>
> table persist { $PUBLIC1 $PUBLIC2 $REMOTEPUB3 $REMOTEPUB4
> 172.30.255.240/28 }
> table  persist { $PUBLIC1 $PUBLIC2 172.28.0.251
> 172.28.0.252 }
> table  persist { 10.200.0.0/16 192.168.0.0/16 172.16.0.0/12
> }
> table persist { 10.200.80.0/24 172.28.0.247 172.28.0.248
> 10.200.72.0/24 172.28.0.10 172.28.0.11 172.28.0.12 172.28.2.0/24 }
>
> ##
> # queues
>
> altq on $INETIF  cbq bandwidth 20Mb queue { ssh, sansync, std }
> altq on $TWP2PIF cbq bandwidth 35Mb queue { ssh, sansync, std }
>
> queue sansync   bandwidth 35%   priority 1  cbq(borrow ecn)
> queue std   bandwidth 50%   priority 2  cbq(default borrow)
> queue ssh   bandwidth 15%   priority 3  cbq(borrow ecn)
>
> ##
> # options
>
> set skip on lo
> set skip on enc0
> set skip on gif
> set skip on $PFSYNC
> set block-policy return
> set loginterface $WANIF
>
>
> ##
> # ftp proxy
>
> anchor "ftp-proxy/*"
> pass in quick on $LANIF inet proto tcp to any port ftp \
> divert-to 127.0.0.1 port 8021
>
>
> ##
> # match rules
>
> match in from  scrub (no-df random-id max-mss 1200)
>
>
> ##
> # filter rules
>
> block in log
> pass out
> pass out proto tcp all modulate state
>
> # site-to-site vpn
> pass in quick log proto esp from 
> pass in quick log proto udp from  port isakmp
>
> antispoof quick for { lo $LANIF }
>
> pass in quick proto carp from any to any
> pass in quick inet proto icmp from any to any icmp-type { echoreq echorep
> timex unreach }
>
> pass in quick on $LANIF to  queue sansync label sansync
> pass in quick on $LANIF proto tcp to port { ssh } queue ssh label ssh
>
> pass in quick on $LANIF proto tcp to port { 3389 } queue ssh label rdp
>
> pass in log on $LANIF queue std label std
>
>
>
> dmesg:
> =
> OpenBSD 5.4 (GENERIC.MP) #41: Tue Jul 30 15:30:02 MDT 2013



pf/queue questions

2014-09-23 Thread Dewey Hylton
i have a site-to-site vpn setup across a 40Mbps wan link (average ~30ms 
latency). one of its uses is for san replication, but of course management 
traffic (ssh sessions, etc.) have to cross the link as well. without using 
queues, at times the replication traffic is such that management traffic 
suffers to the point of being unusable. 

so i setup queues, which "fixed" the management problem. but despite the 
management bandwidth requirements being minimal, the san replication traffic 
was then seen to plateau well below where i believe it should have been.

one specific thing i'm seeing with this particular configuration is that san 
replication traffic tops out at 24Mbps, as seen on the wan circuit itself 
(outside of openbsd). removing the queues results in 100% wan utilization, even 
up to 100Mbps when the circuit is temporarily reconfigured to allow it.

i have to assume that i've misunderstood the documentation and am looking for 
some help. i'll paste the pf.conf below, followed by dmesg. we have a fairly 
complex network on each end of the vpn, and we do in fact need the nat that you 
will see though not for reasons related to the san replication traffic. i have 
no doubt that i've done things incorrectly even in areas seemingly unrelated to 
san replication, so feel free to fire away ...


pf.conf:
===
##
# macros

LANIF   = em0
WANIF   = em1
PFSYNC  = em2
INETIF  = vlan2
TWP2PIF = vlan3

table persist { $PUBLIC1 $PUBLIC2 $REMOTEPUB3 $REMOTEPUB4 
172.30.255.240/28 }
table  persist { $PUBLIC1 $PUBLIC2 172.28.0.251 172.28.0.252 }
table  persist { 10.200.0.0/16 192.168.0.0/16 172.16.0.0/12 }
table persist { 10.200.80.0/24 172.28.0.247 172.28.0.248 
10.200.72.0/24 172.28.0.10 172.28.0.11 172.28.0.12 172.28.2.0/24 }

##
# queues

altq on $INETIF  cbq bandwidth 20Mb queue { ssh, sansync, std }
altq on $TWP2PIF cbq bandwidth 35Mb queue { ssh, sansync, std }

queue sansync   bandwidth 35%   priority 1  cbq(borrow ecn) 
queue std   bandwidth 50%   priority 2  cbq(default borrow)
queue ssh   bandwidth 15%   priority 3  cbq(borrow ecn)

##
# options

set skip on lo
set skip on enc0
set skip on gif
set skip on $PFSYNC
set block-policy return
set loginterface $WANIF


##
# ftp proxy

anchor "ftp-proxy/*"
pass in quick on $LANIF inet proto tcp to any port ftp \
divert-to 127.0.0.1 port 8021


##
# match rules

match in from  scrub (no-df random-id max-mss 1200)


##
# filter rules

block in log
pass out 
pass out proto tcp all modulate state

# site-to-site vpn
pass in quick log proto esp from 
pass in quick log proto udp from  port isakmp

antispoof quick for { lo $LANIF }

pass in quick proto carp from any to any
pass in quick inet proto icmp from any to any icmp-type { echoreq echorep timex 
unreach }

pass in quick on $LANIF to  queue sansync label sansync
pass in quick on $LANIF proto tcp to port { ssh } queue ssh label ssh

pass in quick on $LANIF proto tcp to port { 3389 } queue ssh label rdp

pass in log on $LANIF queue std label std




dmesg:
=
OpenBSD 5.4 (GENERIC.MP) #41: Tue Jul 30 15:30:02 MDT 2013
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 8403169280 (8013MB)
avail mem = 8171749376 (7793MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xb7fcb000 (82 entries)
bios0: vendor HP version "P80" date 11/08/2013
bios0: HP ProLiant DL320e Gen8 v2
acpi0 at bios0: rev 2
acpi0: sleep states S0 S4 S5
acpi0: tables DSDT FACP SPCR MCFG HPET  SPMI ERST APIC  BERT HEST DMAR 
 SSDT SSDT SSDT SSDT SSDT
acpi0: wakeup devices PCI0(S4)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimcfg0 at acpi0 addr 0xb800, bus 0-63
acpihpet0 at acpi0: 14318179 Hz
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E3-1270 v3 @ 3.50GHz, 3492.44 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
cpu0: apic clock running at 99MHz
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Xeon(R) CPU E3-1270 v3 @ 3.50GHz, 3491.92 MHz
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM
cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 

pf/queue questions

2014-09-23 Thread Dewey Hylton
i have a site-to-site vpn setup across a 40Mbps wan link (average ~30ms
latency). one of its uses is for san replication, but of course management
traffic (ssh sessions, etc.) have to cross the link as well. without using
queues, at times the replication traffic is such that management traffic
suffers to the point of being unusable.

so i setup queues, which "fixed" the management problem. but despite the
management bandwidth requirements being minimal, the san replication
traffic was then seen to plateau well below where i believe it should have
been.

one specific thing i'm seeing with this particular configuration is that
san replication traffic tops out at 24Mbps, as seen on the wan circuit
itself (outside of openbsd). removing the queues results in 100% wan
utilization, even up to 100Mbps when the circuit is temporarily
reconfigured to allow it.

i have to assume that i've misunderstood the documentation and am looking
for some help. i'll paste the pf.conf below, followed by dmesg. we have a
fairly complex network on each end of the vpn, and we do in fact need the
nat that you will see though not for reasons related to the san replication
traffic. i have no doubt that i've done things incorrectly even in areas
seemingly unrelated to san replication, so feel free to fire away ...


pf.conf:
===
##
# macros

LANIF   = em0
WANIF   = em1
PFSYNC  = em2
INETIF  = vlan2
TWP2PIF = vlan3

table persist { $PUBLIC1 $PUBLIC2 $REMOTEPUB3 $REMOTEPUB4
172.30.255.240/28 }
table  persist { $PUBLIC1 $PUBLIC2 172.28.0.251
172.28.0.252 }
table  persist { 10.200.0.0/16 192.168.0.0/16 172.16.0.0/12
}
table persist { 10.200.80.0/24 172.28.0.247 172.28.0.248
10.200.72.0/24 172.28.0.10 172.28.0.11 172.28.0.12 172.28.2.0/24 }

##
# queues

altq on $INETIF  cbq bandwidth 20Mb queue { ssh, sansync, std }
altq on $TWP2PIF cbq bandwidth 35Mb queue { ssh, sansync, std }

queue sansync   bandwidth 35%   priority 1  cbq(borrow ecn)
queue std   bandwidth 50%   priority 2  cbq(default borrow)
queue ssh   bandwidth 15%   priority 3  cbq(borrow ecn)

##
# options

set skip on lo
set skip on enc0
set skip on gif
set skip on $PFSYNC
set block-policy return
set loginterface $WANIF


##
# ftp proxy

anchor "ftp-proxy/*"
pass in quick on $LANIF inet proto tcp to any port ftp \
divert-to 127.0.0.1 port 8021


##
# match rules

match in from  scrub (no-df random-id max-mss 1200)


##
# filter rules

block in log
pass out
pass out proto tcp all modulate state

# site-to-site vpn
pass in quick log proto esp from 
pass in quick log proto udp from  port isakmp

antispoof quick for { lo $LANIF }

pass in quick proto carp from any to any
pass in quick inet proto icmp from any to any icmp-type { echoreq echorep
timex unreach }

pass in quick on $LANIF to  queue sansync label sansync
pass in quick on $LANIF proto tcp to port { ssh } queue ssh label ssh

pass in quick on $LANIF proto tcp to port { 3389 } queue ssh label rdp

pass in log on $LANIF queue std label std




dmesg:
=
OpenBSD 5.4 (GENERIC.MP) #41: Tue Jul 30 15:30:02 MDT 2013
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 8403169280 (8013MB)
avail mem = 8171749376 (7793MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xb7fcb000 (82 entries)
bios0: vendor HP version "P80" date 11/08/2013
bios0: HP ProLiant DL320e Gen8 v2
acpi0 at bios0: rev 2
acpi0: sleep states S0 S4 S5
acpi0: tables DSDT FACP SPCR MCFG HPET  SPMI ERST APIC  BERT HEST
DMAR  SSDT SSDT SSDT SSDT SSDT
acpi0: wakeup devices PCI0(S4)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimcfg0 at acpi0 addr 0xb800, bus 0-63
acpihpet0 at acpi0: 14318179 Hz
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E3-1270 v3 @ 3.50GHz, 3492.44 MHz
cpu0:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUS
H,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX
,SMX,EST,TM2,SSSE3,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT
,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1
,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
cpu0: apic clock running at 99MHz
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Xeon(R) CPU E3-1270 v3 @ 3.50GHz, 3491.92 MHz
cpu1:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUS
H,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX
,SMX,EST,TM2,SSSE3,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT
,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1
,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM
cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 1, pack

Re: pf queue max bug

2014-09-23 Thread Atanas Vladimirov

On 22.09.2014 23:23, Jacob L. Leifman wrote:

Hi,

I think you are hitting the edge case discussed earlier this month (by
stu@ henning@ and others and it might have been on tech@) -- due to
fairly low OS interrupt rate (baked in default is 100Hz), low bandwidth
queue limits on high-bandwidth pipes mostly do not work. Currently the
only offered "solution" was to rebuild the kernel with increased tick
rate. I recommend searching the archives, the subject was something
about "the new queue system".

-Jacob.



Hi,
I made a new kernel on top of GENERIC.MP with HZ=1000. Same behavior.
Also qlimit(50) never get full and pkts/bytes never get dropped.

[ns]/sys/arch/amd64/conf$ more HZ
#   $OpenBSD: GENERIC.MP,v 1.11 2014/09/03 07:44:33 blambert Exp $

include "arch/amd64/conf/GENERIC"

option  MULTIPROCESSOR
#option MP_LOCKDEBUG
option  HZ=1000
cpu*at mainbus?


pf.conf

 queue rootq on $ExtIf bandwidth 98M, max 99M
  queue inter parent rootq bandwidth 1M, max 2M default
  queue bg parent rootq bandwidth 10M, max 15M

[ns]~$ sysctl -a | grep kern.clo
kern.clockrate=tick = 1000, tickadj = 4, hz = 1000, profhz = 1000, 
stathz = 1000


pfctl -vvs queue
queue rootq on em0 bandwidth 98M, max 99M qlimit 50
  [ pkts:  0  bytes:  0  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured: 0.0 packets/s, 0 b/s ]
queue inter parent rootq on em0 bandwidth 1M, max 2M default qlimit 50
  [ pkts: 147059  bytes:  150743153  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured:  3751.0 packets/s, 30.76Mb/s ]
queue bg parent rootq on em0 bandwidth 10M, max 15M qlimit 50
  [ pkts:   1015  bytes: 107200  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured:27.2 packets/s, 23.16Kb/s ]

systat q

QUEUE BW SCH  PRIO PKTSBYTES   
DROP_P   DROP_B QLEN BORROW SUSPEN P/S B/S
rootq on em0 98M  00
000 0   0
 inter1M 191868  194045K
000  3004 3455760
 bg  10M   1297   135814
000324608


dmesg
OpenBSD 5.6-current (HZ) #0: Tue Sep 23 11:26:16 EEST 2014
vl...@ns.bsdbg.net:/sys/arch/amd64/compile/HZ
real mem = 6416760832 (6119MB)
avail mem = 6237212672 (5948MB)
.



Re: pf queue max bug

2014-09-22 Thread Atanas Vladimirov

On 22.09.2014 22:50, Atanas Vladimirov wrote:

Hi,
I rewrote my rulesets with no luck:

QUEUE   BW SCH  PRIO PKTSBYTES   DROP_P
DROP_B QLEN BORROW SUSPEN P/S B/S
rootq on em0   98M  000
00 0   0
 inter  1M 179572  214136K0
00   898 1232993
 bg10M   6360   7277640
00 3 308

queue rootq on em0 bandwidth 98M, max 99M qlimit 50
  [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:
  0 ]

  [ qlength:   0/ 50 ]
  [ measured: 0.0 packets/s, 0 b/s ]
queue inter parent rootq on em0 bandwidth 1M, max 2M default qlimit 50
  [ pkts:  67209  bytes:   80035513  dropped pkts:  0 bytes:
  0 ]

  [ qlength:   0/ 50 ]
  [ measured:  1172.0 packets/s, 11.13Mb/s ]
queue bg parent rootq on em0 bandwidth 10M, max 15M qlimit 50
  [ pkts:   1858  bytes: 215486  dropped pkts:  0 bytes:
  0 ]

  [ qlength:   0/ 50 ]
  [ measured:32.5 packets/s, 30.58Kb/s ]



pf.conf

### Interfaces ###
 ExtIf ="em0"
 IntIf ="vlan41"
 Free  ="vlan81"

 sam = "192.168.1.18"

### Tables ###
  table  file "/etc/bgnets"
  table  persist
  table  persist

### Misc Options
 set loginterface $ExtIf
 set skip on { lo, enc0 }
 set limit table-entries 40  # Full list is 200k entries as of 
March 1


 Queueing 



 queue rootq on $ExtIf bandwidth 98M, max 99M
  queue inter parent rootq bandwidth 1M, max 2M default
  queue bg parent rootq bandwidth 10M, max 15M

 Translation and Filtering 
###


### BLOCK all in/out on all interfaces by default and log
 block return log on $ExtIf
 block return log on $IntIf
 block return log on $Free

### Network Address Translation (NAT with outgoing source port 
randomization)

 match out log on egress from $IntIf:network \
to any nat-to ($ExtIf:0)
 match out log on egress from $Free:network \
to any nat-to ($ExtIf:0)

### NAT from IntIf to FreeWifi
 match out log on $Free from $IntIf:network \
to $Free:network nat-to ($Free:0)

### Packet normalization ( "scrubbing" )
 match log on $ExtIf all scrub (random-id max-mss 1440)

### $ExtIf inbound 

# dns nsd
  pass in log on $ExtIf inet proto {tcp, udp} from any \
 to ($ExtIf) port domain set queue inter
  pass in log on $ExtIf inet proto {tcp, udp} from  \
 to ($ExtIf) port domain set queue bg

# OpenSMTPD
  pass in log quick on $ExtIf inet proto tcp from  \
 to ($ExtIf) port smtp set queue  inter rdr-to lo0
  pass in log on $ExtIf inet proto tcp from any \
 to ($ExtIf) port smtp rdr-to lo0 port spamd
  pass in log on $ExtIf inet proto tcp from  \
 to ($ExtIf) port smtp set queue  inter rdr-to lo0

# Nginx
  pass in log on $ExtIf inet proto tcp from any \
 to ($ExtIf) port {www, https} set queue  inter rdr-to lo0
  pass in log on $ExtIf inet proto tcp from  \
 to ($ExtIf) port {www, https} set queue bg rdr-to lo0


# Ntpd ( time server )
  pass in log on $ExtIf inet proto udp from any \
 to ($ExtIf) port ntp set queue inter
  pass in log on $ExtIf inet proto udp from  \
 to ($ExtIf) port ntp set queue bg

### End $ExtIf inbound ###

### $IntIf outbound ###

# Allow self to reach Lan
  pass out log on $IntIf inet proto {tcp, udp, icmp} from (self) \
 to $IntIf:network

### End $IntIf outbound ###

### $IntIf inbound ###

# Allow all out
  pass in log on $IntIf inet proto {tcp, udp, icmp} from $IntIf:network 
\

 to any

# Allow SamKnows to run it's tests
  pass in log on $IntIf inet proto {tcp, udp, icmp} from $sam \
 to any tag SAM

### End $IntIf inbound ###

### $ExtIf outbound ###

## TCP ##
# Queue default
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to any set queue inter
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to  set queue bg

# Queue dns
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to any port domain set queue inter
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to  port domain set queue bg

## UDP ##
# Queue default
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to any set queue inter
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to  set queue bg

# Queue dns
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to any port domain set queue inter
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to  port domain set queue bg

# Queue ntp
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to any port ntp set queue inter
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to  port ntp set queue bg

# ICMP
  pass out log on $ExtIf inet proto icmp from ($ExtIf) \
 to any set queue inter
  pass out log on $ExtIf inet proto icmp from ($ExtIf) \

Re: pf queue max bug

2014-09-22 Thread Atanas Vladimirov

Hi,
I rewrote my rulesets with no luck:

QUEUE   BW SCH  PRIO PKTSBYTES   DROP_P   DROP_B 
QLEN BORROW SUSPEN P/S B/S
rootq on em0   98M  0000 
   0 0   0
 inter  1M 179572  214136K00 
   0   898 1232993
 bg10M   6360   72776400 
   0 3 308


queue rootq on em0 bandwidth 98M, max 99M qlimit 50
  [ pkts:  0  bytes:  0  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured: 0.0 packets/s, 0 b/s ]
queue inter parent rootq on em0 bandwidth 1M, max 2M default qlimit 50
  [ pkts:  67209  bytes:   80035513  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured:  1172.0 packets/s, 11.13Mb/s ]
queue bg parent rootq on em0 bandwidth 10M, max 15M qlimit 50
  [ pkts:   1858  bytes: 215486  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured:32.5 packets/s, 30.58Kb/s ]



pf.conf

### Interfaces ###
 ExtIf ="em0"
 IntIf ="vlan41"
 Free  ="vlan81"

 sam = "192.168.1.18"

### Tables ###
  table  file "/etc/bgnets"
  table  persist
  table  persist

### Misc Options
 set loginterface $ExtIf
 set skip on { lo, enc0 }
 set limit table-entries 40  # Full list is 200k entries as of March 
1


 Queueing 



 queue rootq on $ExtIf bandwidth 98M, max 99M
  queue inter parent rootq bandwidth 1M, max 2M default
  queue bg parent rootq bandwidth 10M, max 15M

 Translation and Filtering 
###


### BLOCK all in/out on all interfaces by default and log
 block return log on $ExtIf
 block return log on $IntIf
 block return log on $Free

### Network Address Translation (NAT with outgoing source port 
randomization)

 match out log on egress from $IntIf:network \
to any nat-to ($ExtIf:0)
 match out log on egress from $Free:network \
to any nat-to ($ExtIf:0)

### NAT from IntIf to FreeWifi
 match out log on $Free from $IntIf:network \
to $Free:network nat-to ($Free:0)

### Packet normalization ( "scrubbing" )
 match log on $ExtIf all scrub (random-id max-mss 1440)

### $ExtIf inbound 

# dns nsd
  pass in log on $ExtIf inet proto {tcp, udp} from any \
 to ($ExtIf) port domain set queue inter
  pass in log on $ExtIf inet proto {tcp, udp} from  \
 to ($ExtIf) port domain set queue bg

# OpenSMTPD
  pass in log quick on $ExtIf inet proto tcp from  \
 to ($ExtIf) port smtp set queue  inter rdr-to lo0
  pass in log on $ExtIf inet proto tcp from any \
 to ($ExtIf) port smtp rdr-to lo0 port spamd
  pass in log on $ExtIf inet proto tcp from  \
 to ($ExtIf) port smtp set queue  inter rdr-to lo0

# Nginx
  pass in log on $ExtIf inet proto tcp from any \
 to ($ExtIf) port {www, https} set queue  inter rdr-to lo0
  pass in log on $ExtIf inet proto tcp from  \
 to ($ExtIf) port {www, https} set queue bg rdr-to lo0


# Ntpd ( time server )
  pass in log on $ExtIf inet proto udp from any \
 to ($ExtIf) port ntp set queue inter
  pass in log on $ExtIf inet proto udp from  \
 to ($ExtIf) port ntp set queue bg

### End $ExtIf inbound ###

### $IntIf outbound ###

# Allow self to reach Lan
  pass out log on $IntIf inet proto {tcp, udp, icmp} from (self) \
 to $IntIf:network

### End $IntIf outbound ###

### $IntIf inbound ###

# Allow all out
  pass in log on $IntIf inet proto {tcp, udp, icmp} from $IntIf:network 
\

 to any

# Allow SamKnows to run it's tests
  pass in log on $IntIf inet proto {tcp, udp, icmp} from $sam \
 to any tag SAM

### End $IntIf inbound ###

### $ExtIf outbound ###

## TCP ##
# Queue default
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to any set queue inter
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to  set queue bg

# Queue dns
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to any port domain set queue inter
  pass out log on $ExtIf inet proto tcp from ($ExtIf) \
 to  port domain set queue bg

## UDP ##
# Queue default
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to any set queue inter
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to  set queue bg

# Queue dns
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to any port domain set queue inter
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to  port domain set queue bg

# Queue ntp
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to any port ntp set queue inter
  pass out log on $ExtIf inet proto udp from ($ExtIf) \
 to  port ntp set queue bg

# ICMP
  pass out log on $ExtIf inet proto icmp from ($ExtIf) \
 to any set queue inter
  pass out log on $ExtIf inet proto icmp from ($ExtIf) \
 to  set queue bg


Re: pf queue max bug

2014-09-21 Thread Atanas Vladimirov

On 21.09.2014 20:56, Kevin Gerrard wrote:
I was receiving this same error a few days ago. It was because I had a 
rule
that was referring to a table that was not there or something another. 
That
was the exact error I received. Finally figured that out and it has 
been

flawless since.

The rule to flush queues only is :pfctl -F queue


There is no such option, you may see pfctl(8)

[ns]~$ pfctl -F queue
pfctl: Unknown flush modifier 'queue'
usage: pfctl [-deghnPqrvz] [-a anchor] [-D macro=value] [-F modifier]
[-f file] [-i interface] [-K host | network]
[-k host | network | label | id] [-L statefile] [-o level] [-p 
device]
[-S statefile] [-s modifier [-R id]] [-t table -T command 
[address ...]]

[-x level]



I also had to reboot to get away from that error but it was something 
to do

with a table that was not right or a rule referring to the wrong table.
Can't remember why exactly but it was a typo.

As for max queues here is a simple few line queue rule to try and it is
tested and works. All it does is control MAX bandwidth on the interface 
for

me. It is tested and works.

#   Queues   #
##
queue download on $int_if bandwidth 10M, max 10M
   queue down parent download bandwidth 10M default
queue upload on $ext_if bandwidth 10M, max 10M
   queue up parent upload bandwidth 10M default

Keep in mind this is a fiber connection where AT&T lets us spike above 
our
25Mbit limit for a price, therefore we put this rule in to keep 
that

from happening.

Hope this helps.

Kevin Gerrard


Thanks for your time. I'm going to rewrite my rulesets with minimum for 
my needs and I'll test again.




Re: pf queue max bug

2014-09-21 Thread Atanas Vladimirov

Hi,
Is there any way to disable/flush (like with ALTQ) pf queues?
I tryed with `pfctl -d; pfctl -e; pfctl -f /etc/pf.conf' but I got an 
error:


pfctl: DIOCXCOMMIT: Invalid argument

The only reference I could find was this:

http://marc.info/?l=openbsd-tech&m=140421855720135&w=2

Is this a known behavior?
After this error the only way to load my rulesets was with reboot.
I still trying to figure out why my queues don't limit the max 
bandwidth.

Thanks for your time.

--
pf.conf
--

### Interfaces ###
 ExtIf ="em0"
 IntIf ="vlan41"
 Free  ="vlan81"
 lo0   ="127.0.0.1"

### Hosts ###
 vl="192.168.1.2"
 jl="192.168.1.3"
 ve="192.168.1.4"
 ntp="192.168.1.5"
 rpi="192.168.1.7"
 dpc11="192.168.1.11"
 sam="192.168.1.16"
 cs_serv="10.10.10.254"
 mc_serv="10.10.10.253"
 mc_serv1="10.10.10.252"
 r2_serv="10.10.10.240"
 w7_rdc ="10.10.10.241"
 dpc21="192.168.1.21"

### Ports ###
 low_ports = "0:1023"
 hi_ports  = "1024:65535"
 web   = "{20, 21, 22, 25, 80, 443, , 3389, 5900, 6000, , 
8080 }"

 ssh_extif = ""
 rdc   = "3389"
 rdc_extif = "4910"
 rdc_r2= "5511"
 rdc_w7= "5522"
 squid = "8080"
 squid_extif = "8080"
 vl_skype  = "30001"
 jl_skype  = "30002"
 ve_skype  = "30003"
 vl_torrent= "30004"
 jl_torrent= "30005"
 ve_torrent= "30006"
 vl_hfs= "8081"
 ftp_proxy = "8021"
 symux = "2100"
 ftp   = "21"
 vnc_ext   = "59001"
 vnc_int   = "5900"
 sftp  = "2"
 l2tp  = "{ 500, 1701, 4500 }"
 mine  = "25565"
 mine1 = "25566"
 trace = "33434:33498"
 cs16  = "27000:27018"
 q3= "27960:27963"
 ventrilo  = "3784"

### Queues, States and Types ###
 IcmpType ="icmp-type 8 code 0"
 SynState ="flags S/SAFR synproxy state"

### Tables ###
  table  file "/etc/bgnets"
  table  persist
  table  persist
  table  file "/etc/proxy_users"
  table  persist #{ 82.119.88.70 }

 Options 
##

### Misc Options
# set block-policy drop
 set loginterface $ExtIf
 set skip on { lo, enc0 }
# set optimization aggressive
 set limit table-entries 40  # Full list is 200k entries as of March 
1

# set state-defaults pflow

 Queueing 



 queue rootq on $ExtIf bandwidth 98M, max 99M
  queue inter parent rootq bandwidth 2M, max 3M
   queue i_ack parent inter bandwidth 1M, min 900K
   queue i_dns parent inter bandwidth 500K, min 400K
   queue i_ntp parent inter bandwidth 300K, min 200K
   queue i_web parent inter bandwidth 500K burst 2M for 1ms
   queue i_bulkparent inter bandwidth 170K
   queue i_bittor  parent inter bandwidth 30K, max 1400K default

  queue bg parent rootq bandwidth 39M, max 40M
   queue b_ack parent bg bandwidth 15M, min 10M
   queue b_dns parent bg bandwidth 1M, min 900K
   queue b_ntp parent bg bandwidth 4M, min 3900K
   queue b_rdc parent bg bandwidth 4M, min 3900K
   queue b_web parent bg bandwidth 10M, min 9M burst 40M for 5000ms, 
max 37M

   queue b_bulkparent bg bandwidth 5M, min 4M
   queue b_bittor  parent bg bandwidth 1M, max 2M

 Translation and Filtering 
###


### BLOCK all in/out on all interfaces by default and log
 block return log on $ExtIf
 block return log on $IntIf
 block return log on $Free
 block quick  log on $ExtIf from 

### Network Address Translation (NAT with outgoing source port 
randomization)

 match out log on egress from $IntIf:network \
to any nat-to ($ExtIf:0)
 match out log on egress from $Free:network \
to any nat-to ($ExtIf:0)
 match out log on egress from 192.168.3.0/24 \
to any nat-to ($ExtIf:0)

### NAT from IntIf to FreeWifi
 match out log on $Free from $IntIf:network \
to $Free:network nat-to ($Free:0)

### Packet normalization ( "scrubbing" )
 match log on $ExtIf all scrub (random-id max-mss 1440)

### Ftp ( secure ftp proxy for LAN )
 anchor "ftp-proxy/*"
 anchor vpn

### pppx
 pass log on pppx

 pass log proto esp set queue b_ack
# pass log proto gre set queue b_ack

### $ExtIf inbound 

# npppd
  pass in log on $ExtIf proto {tcp, udp} from  \
 to ($ExtIf) port $l2tp set queue b_dns

# dns nsd
  pass in log on $ExtIf inet proto {tcp, udp} from any \
 to ($ExtIf) port domain set queue i_dns
  pass in log on $ExtIf inet proto {tcp, udp} from  \
 to ($ExtIf) port domain set queue b_dns

# OpenSSH
  pass in log on $ExtIf inet proto tcp from  \
 to ($ExtIf) port $ssh_extif set queue b_ack rdr-to $lo0 port ssh

# OpenSMTPD
  pass in log quick on $ExtIf inet proto tcp from  \
 to ($ExtIf) port smtp set queue (i_web, i_ack) rdr-to lo0
  pass in log on $ExtIf inet proto tcp from any \
 to ($ExtIf) port smtp rdr-to lo0 port spamd
  pass in log on $ExtIf inet proto tcp from  \
 to ($ExtIf) port smtp set queue (i_web, i_ack) rdr-to lo0

# IMAPS/SMTPS
  pass in log on $ExtIf inet proto tcp from  \
 to ($ExtI

Re: pf queue max bug

2014-09-17 Thread Atanas Vladimirov

On 16.09.2014 20:00, Atanas Vladimirov wrote:

On 16.09.2014 19:32, Zé Loff wrote:

On Tue, Sep 16, 2014 at 01:07:00PM +0200, Henning Brauer wrote:

* Atanas Vladimirov  [2014-09-16 12:58]:
> As I said this was my working pf.conf for new queueing system on i386.
> I think that the problem is elsewhere. When you set the queue max bandwidth
> it must not exceed that value.

if the sums of the target bandwidth exceed interface speed or
min/target exceed max, all bets are off. fix your queue defs.


I was looking at pf.conf's man page and noticed that in some examples
the queue parameters appear separated by commas:

  queue ssh parent std bandwidth 10M, min 5M, max 25M

and in some cases without commas:

  queue  ssh_interactive parent ssh bandwidth 10M min 5M

Does this make a difference? And if not, should pf.conf be fixed for
consistency?

Cheers
Zé


--
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS. Virtual & Dedicated Servers, Root to 
Fully Managed

Henning Brauer Consulting, http://henningbrauer.com/



Hi Henning,
Thanks for your respond but can you give me an advice on how to solve
this matter.
I changed my queues definitions (the sums didn't exceed interface
speed or min > max) with no luck:

   ^
min < max


 queue rootq on $ExtIf bandwidth 100M max 100M
  queue inter parent rootq bandwidth 3M max 3M
   queue i_ack parent inter bandwidth 1M, min 1M
   queue i_dns parent inter bandwidth 500K
   queue i_ntp parent inter bandwidth 300K
   queue i_web parent inter bandwidth 1M burst 2M for 1ms
   queue i_bulkparent inter bandwidth 170K
   queue i_bittor  parent inter bandwidth 30K, max 1400K default

  queue bg parent rootq bandwidth 40M max 40M
   queue b_ack parent bg bandwidth 15M, min 10M
   queue b_dns parent bg bandwidth 1M, min 1M
   queue b_ntp parent bg bandwidth 4M, min 4M
   queue b_rdc parent bg bandwidth 4M, min 4M
   queue b_web parent bg bandwidth 10M, min 9M burst 40M for 
5000ms, max 37M

   queue b_bulkparent bg bandwidth 5M, min 4M
   queue b_bittor  parent bg bandwidth 1M, max 2M

queue b_bittor parent bg on em0 bandwidth 1M, max 2M qlimit 50
  [ pkts:  54890  bytes:   79466769  dropped pkts:  0 bytes:
  0 ]

  [ qlength:   0/ 50 ]
  [ measured:  2701.8 packets/s, 31.28Mb/s ]

QUEUE BW SCH  PRIO PKTSBYTES
DROP_P   DROP_B QLEN BORROW SUSPEN P/S B/S
rootq on em0100M  00
 000 0   0
 inter3M  00
 000 0   0
  i_ack   1M  47177  3118394
 000   215   14246
  i_dns 500K35432191
 00010 954
  i_ntp 300K15113590
 000   1.0  89
  i_web   1M   1970   634138
 000102810
  i_bulk170K14328491
 000   1.0  41
  i_bittor   30K  19556  2837925
 00060   18395
 bg  40M  00
 000 0   0
  b_ack  15M   101160608
 000   1.0  57
  b_dns   1M 56 8423
 000 0   0
  b_ntp   4M43439016
 000   1.0  89
  b_rdc   4M  00
 000 0   0
  b_web  10M10714833
 000   1.0 108
  b_bulk  5M 32 2318
 000 0   0
  b_bittor1M 450367  636064K
 000  2264 3245446




Re: pf queue max bug

2014-09-16 Thread Atanas Vladimirov

On 16.09.2014 19:32, Zé Loff wrote:

On Tue, Sep 16, 2014 at 01:07:00PM +0200, Henning Brauer wrote:

* Atanas Vladimirov  [2014-09-16 12:58]:
> As I said this was my working pf.conf for new queueing system on i386.
> I think that the problem is elsewhere. When you set the queue max bandwidth
> it must not exceed that value.

if the sums of the target bandwidth exceed interface speed or
min/target exceed max, all bets are off. fix your queue defs.


I was looking at pf.conf's man page and noticed that in some examples
the queue parameters appear separated by commas:

  queue ssh parent std bandwidth 10M, min 5M, max 25M

and in some cases without commas:

  queue  ssh_interactive parent ssh bandwidth 10M min 5M

Does this make a difference? And if not, should pf.conf be fixed for
consistency?

Cheers
Zé


--
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS. Virtual & Dedicated Servers, Root to 
Fully Managed

Henning Brauer Consulting, http://henningbrauer.com/



Hi Henning,
Thanks for your respond but can you give me an advice on how to solve 
this matter.
I changed my queues definitions (the sums didn't exceed interface speed 
or min > max) with no luck:


 queue rootq on $ExtIf bandwidth 100M max 100M
  queue inter parent rootq bandwidth 3M max 3M
   queue i_ack parent inter bandwidth 1M, min 1M
   queue i_dns parent inter bandwidth 500K
   queue i_ntp parent inter bandwidth 300K
   queue i_web parent inter bandwidth 1M burst 2M for 1ms
   queue i_bulkparent inter bandwidth 170K
   queue i_bittor  parent inter bandwidth 30K, max 1400K default

  queue bg parent rootq bandwidth 40M max 40M
   queue b_ack parent bg bandwidth 15M, min 10M
   queue b_dns parent bg bandwidth 1M, min 1M
   queue b_ntp parent bg bandwidth 4M, min 4M
   queue b_rdc parent bg bandwidth 4M, min 4M
   queue b_web parent bg bandwidth 10M, min 9M burst 40M for 5000ms, 
max 37M

   queue b_bulkparent bg bandwidth 5M, min 4M
   queue b_bittor  parent bg bandwidth 1M, max 2M

queue b_bittor parent bg on em0 bandwidth 1M, max 2M qlimit 50
  [ pkts:  54890  bytes:   79466769  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured:  2701.8 packets/s, 31.28Mb/s ]

QUEUE BW SCH  PRIO PKTSBYTES   
DROP_P   DROP_B QLEN BORROW SUSPEN P/S B/S
rootq on em0100M  00
000 0   0
 inter3M  00
000 0   0
  i_ack   1M  47177  3118394
000   215   14246
  i_dns 500K35432191
00010 954
  i_ntp 300K15113590
000   1.0  89
  i_web   1M   1970   634138
000102810
  i_bulk170K14328491
000   1.0  41
  i_bittor   30K  19556  2837925
00060   18395
 bg  40M  00
000 0   0
  b_ack  15M   101160608
000   1.0  57
  b_dns   1M 56 8423
000 0   0
  b_ntp   4M43439016
000   1.0  89
  b_rdc   4M  00
000 0   0
  b_web  10M10714833
000   1.0 108
  b_bulk  5M 32 2318
000 0   0
  b_bittor1M 450367  636064K
000  2264 3245446




Re: pf queue max bug

2014-09-16 Thread Zé Loff
On Tue, Sep 16, 2014 at 01:07:00PM +0200, Henning Brauer wrote:
> * Atanas Vladimirov  [2014-09-16 12:58]:
> > As I said this was my working pf.conf for new queueing system on i386.
> > I think that the problem is elsewhere. When you set the queue max bandwidth
> > it must not exceed that value.
> 
> if the sums of the target bandwidth exceed interface speed or
> min/target exceed max, all bets are off. fix your queue defs.

I was looking at pf.conf's man page and noticed that in some examples
the queue parameters appear separated by commas:

  queue ssh parent std bandwidth 10M, min 5M, max 25M

and in some cases without commas:

  queue  ssh_interactive parent ssh bandwidth 10M min 5M

Does this make a difference? And if not, should pf.conf be fixed for
consistency? 

Cheers
Zé

> -- 
> Henning Brauer, h...@bsws.de, henn...@openbsd.org
> BS Web Services GmbH, http://bsws.de, Full-Service ISP
> Secure Hosting, Mail and DNS. Virtual & Dedicated Servers, Root to Fully 
> Managed
> Henning Brauer Consulting, http://henningbrauer.com/
> 

-- 



Re: pf queue max bug

2014-09-16 Thread Henning Brauer
* Atanas Vladimirov  [2014-09-16 12:58]:
> As I said this was my working pf.conf for new queueing system on i386.
> I think that the problem is elsewhere. When you set the queue max bandwidth
> it must not exceed that value.

if the sums of the target bandwidth exceed interface speed or
min/target exceed max, all bets are off. fix your queue defs.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS. Virtual & Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: pf queue max bug

2014-09-16 Thread Atanas Vladimirov

On 16.09.2014 12:36, Zé Loff wrote:

On Tue, Sep 16, 2014 at 10:20:34AM +0300, Atanas Vladimirov wrote:

Hi,
I moved my old "server" to a better hardware and I installed amd64 
-current
(old one was i386 following -current) and made a drop in replacement 
of

pf.conf.
The problem is that when I set a queue MAX speed limit it didn't work 
as it

should - for example b_bittor:

pf.conf:

 queue rootq on $ExtIf bandwidth 100M max 100M
  queue inter parent rootq bandwidth 3M max 2950K
   queue i_ack parent inter bandwidth 2M, min 1M
   queue i_dns parent inter bandwidth 500K
   queue i_ntp parent inter bandwidth 300K
   queue i_web parent inter bandwidth 2M burst 2M for 1ms
   queue i_bulkparent inter bandwidth 170K
   queue i_bittor  parent inter bandwidth 30K, max 1400K default

  queue bg parent rootq bandwidth 40M max 39M
   queue b_ack parent bg bandwidth 15M, min 10M
   queue b_dns parent bg bandwidth 1M, min 1M
   queue b_ntp parent bg bandwidth 4M, min 4M
   queue b_rdc parent bg bandwidth 4M, min 4M
   queue b_web parent bg bandwidth 15M, min 15M burst 40M for 
5000ms,

max 37M
   queue b_bulkparent bg bandwidth 8M, min 5M
   queue b_bittor  parent bg bandwidth 2M, max 2M


Why are some of your target bandwidths higher than the allowed maximum
bandwidths?

As I said this was my working pf.conf for new queueing system on i386.
I think that the problem is elsewhere. When you set the queue max 
bandwidth it must not exceed that value.




Re: pf queue max bug

2014-09-16 Thread Zé Loff
On Tue, Sep 16, 2014 at 10:36:21AM +0100, Zé Loff wrote:
> On Tue, Sep 16, 2014 at 10:20:34AM +0300, Atanas Vladimirov wrote:
> > Hi,
> > I moved my old "server" to a better hardware and I installed amd64 -current
> > (old one was i386 following -current) and made a drop in replacement of
> > pf.conf.
> > The problem is that when I set a queue MAX speed limit it didn't work as it
> > should - for example b_bittor:
> > 
> > pf.conf:
> > 
> >  queue rootq on $ExtIf bandwidth 100M max 100M
> >   queue inter parent rootq bandwidth 3M max 2950K
> >queue i_ack parent inter bandwidth 2M, min 1M
> >queue i_dns parent inter bandwidth 500K
> >queue i_ntp parent inter bandwidth 300K
> >queue i_web parent inter bandwidth 2M burst 2M for 1ms
> >queue i_bulkparent inter bandwidth 170K
> >queue i_bittor  parent inter bandwidth 30K, max 1400K default
> > 
> >   queue bg parent rootq bandwidth 40M max 39M
> >queue b_ack parent bg bandwidth 15M, min 10M
> >queue b_dns parent bg bandwidth 1M, min 1M
> >queue b_ntp parent bg bandwidth 4M, min 4M
> >queue b_rdc parent bg bandwidth 4M, min 4M
> >queue b_web parent bg bandwidth 15M, min 15M burst 40M for 5000ms,
> > max 37M
> >queue b_bulkparent bg bandwidth 8M, min 5M
> >queue b_bittor  parent bg bandwidth 2M, max 2M
> 
> Why are some of your target bandwidths higher than the allowed maximum
> bandwidths?
> 

Also, the sum of your min bandwidths on the bg "subqueues" sum up to
41M, whilst the max for the parent queue is 39M.

-- 



Re: pf queue max bug

2014-09-16 Thread Zé Loff
On Tue, Sep 16, 2014 at 10:20:34AM +0300, Atanas Vladimirov wrote:
> Hi,
> I moved my old "server" to a better hardware and I installed amd64 -current
> (old one was i386 following -current) and made a drop in replacement of
> pf.conf.
> The problem is that when I set a queue MAX speed limit it didn't work as it
> should - for example b_bittor:
> 
> pf.conf:
> 
>  queue rootq on $ExtIf bandwidth 100M max 100M
>   queue inter parent rootq bandwidth 3M max 2950K
>queue i_ack parent inter bandwidth 2M, min 1M
>queue i_dns parent inter bandwidth 500K
>queue i_ntp parent inter bandwidth 300K
>queue i_web parent inter bandwidth 2M burst 2M for 1ms
>queue i_bulkparent inter bandwidth 170K
>queue i_bittor  parent inter bandwidth 30K, max 1400K default
> 
>   queue bg parent rootq bandwidth 40M max 39M
>queue b_ack parent bg bandwidth 15M, min 10M
>queue b_dns parent bg bandwidth 1M, min 1M
>queue b_ntp parent bg bandwidth 4M, min 4M
>queue b_rdc parent bg bandwidth 4M, min 4M
>queue b_web parent bg bandwidth 15M, min 15M burst 40M for 5000ms,
> max 37M
>queue b_bulkparent bg bandwidth 8M, min 5M
>queue b_bittor  parent bg bandwidth 2M, max 2M

Why are some of your target bandwidths higher than the allowed maximum
bandwidths?



pf queue max bug

2014-09-16 Thread Atanas Vladimirov

Hi,
I moved my old "server" to a better hardware and I installed amd64 
-current (old one was i386 following -current) and made a drop in 
replacement of pf.conf.
The problem is that when I set a queue MAX speed limit it didn't work as 
it should - for example b_bittor:


pf.conf:

 queue rootq on $ExtIf bandwidth 100M max 100M
  queue inter parent rootq bandwidth 3M max 2950K
   queue i_ack parent inter bandwidth 2M, min 1M
   queue i_dns parent inter bandwidth 500K
   queue i_ntp parent inter bandwidth 300K
   queue i_web parent inter bandwidth 2M burst 2M for 1ms
   queue i_bulkparent inter bandwidth 170K
   queue i_bittor  parent inter bandwidth 30K, max 1400K default

  queue bg parent rootq bandwidth 40M max 39M
   queue b_ack parent bg bandwidth 15M, min 10M
   queue b_dns parent bg bandwidth 1M, min 1M
   queue b_ntp parent bg bandwidth 4M, min 4M
   queue b_rdc parent bg bandwidth 4M, min 4M
   queue b_web parent bg bandwidth 15M, min 15M burst 40M for 
5000ms, max 37M

   queue b_bulkparent bg bandwidth 8M, min 5M
   queue b_bittor  parent bg bandwidth 2M, max 2M

queue b_bittor parent bg on em0 bandwidth 2M, max 2M qlimit 50
  [ pkts:1441771  bytes: 2064477928  dropped pkts:  0 bytes: 
 0 ]

  [ qlength:   0/ 50 ]
  [ measured:  2043.4 packets/s, 23.69Mb/s ]

1 usersLoad 0.41 0.40 0.43 Tue Sep 16 
10:12:56 2014


QUEUE BW SCH  PRIO PKTSBYTES   
DROP_P   DROP_B QLEN BORROW SUSPEN P/S B/S
rootq on em0100M  00
000 0   0
 inter3M  00
000 0   0
  i_ack   2M   8365   527486   
44 23880 0   0
  i_dns 500K   1333   142263
000 0   0
  i_ntp 300K   1143   105017
000   1.0  89
  i_web   2M   2328   742242   
24 21970 0   0
  i_bulk170K   1482   242112
000 0   0
  i_bittor   30K  91768  9426828
000453024
 bg  40M  00
000 0   0
  b_ack  15M  23306  1428052
000 2 164
  b_dns   1M  10044  4865285
00028   18672
  b_ntp   4M   2819   256892
000 1 178
  b_rdc   4M  00
000 0   0
  b_web  15M800   203426   
23105560   1.0  97
  b_bulk  8M32823814
000 0   0
  b_bittor2M1649850 2310255K
000  2610 3796423


dmesg:
OpenBSD 5.6-current (GENERIC.MP) #374: Mon Sep 15 08:42:10 MDT 2014
t...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 6416760832 (6119MB)
avail mem = 6237216768 (5948MB)
..



Re: pf queue priq and set prio

2013-06-15 Thread Михаил Швецов

This all for test - to know how work queue priq and set prio.


15.6.2013 14:58:26 пользователь Stuart Henderson (s...@spacehopper.org) написал:

I have re-ordered the email to make it easier to reply..

On 2013-06-15, Михаил Швецов  wrote:
> Please help me
>
> I have 2 pf.conf files on the server. To example i exec
>
> ifconfig $int_if(em0) media 10baseT
>
> 1)
> set skip on lo
> altq on $int_if priq bandwidth 512Kb queue { qlan_ssh, qlan_def }
> queue qlan_ssh priority 1
> queue qlan_def priority 5 priq (default)
> block
> pass out
> pass in on $int_if
> match in on $int_if proto tcp to port ssh queue qlan_ssh

> When i run 1'st rules:
>
> it's working
>
> I login to this server by ssh and run sysctl and list statistics (vmstat,
> iostat on systat) and see all good.
>
> Then i copy by ftp(may samba) file 1Gig(example) it's copy by 60KB/sec, and i
> CAN'T SEE STATISTICS by ssh. Ssh session freeze.
>
> I may see that "queue qlan_ssh" - worked.

This is expected. High numbers take priority. All "priority 5"
traffic is sent before any "priority 1".

If you have so much high-priority traffic that it totally fills
the link, low-priority traffic will never be sent.



int_if i have 1Gb/sec (GLAN "Intel PRO/1000GT (82541GI)")
queue i make special 512Kb(60KB), to see that ssh freeze when i get file from 
(ftp, samba). And i can see it


> 2)
> set skip on lo
> block
> pass out
> pass in on $int_if
> match in on $int_if set prio 3 # May not wrote this rule, 3 - by default
> match in on $int_if proto tcp to port ssh set prio 1
>
> When i run 2'nd rules:
>
> it's not working
>
> I login to this server by ssh and run sysctl and list statistics (vmstat,
> iostat on systat) and see all good.
>
> Then i copy by ftp(may samba) file 1Gig(example) it's copy by 60KB/sec, and i
> SEE STATISTICS by ssh but with some DELAY. Ssh session with some delay.
>
> I may see that "set prio 1" - NOT worked.
>
> What am i doing wrong?
>
>

For now, "set prio" queues are only limited by NIC speed. If you only
have 512Kb bandwidth as in the first example, this won't do anything
useful yet.

Henning is working on bandwidth controls for this, but it is not
committed. http://undeadly.org/cgi?action=article&sid=20130606101818
has more information.




before second rule i run
ifconfig em0 media 10baseT (when copy file it's get 1000KB/sec) - it's minimum 
for that card)
i want minimum to see that ssh freeze when i get file from (ftp, samba) 
again(how in queue). but i don't see it.


P.S.
this another qestion
Can i do this and will it work?
3-d rule
on $int_if i have 100Mb/sec queue i crop traffice to 512Kb/sec. And how much 
set prio get: 100Mb/sec or 512Kb/sec

set skip on lo
altq on $int_if priq bandwidth 512Kb queue { qlan_def }
 queue qlan_def priority 5 priq (default)
block
pass out
pass in on $int_if
match in on $int_if set prio 3
match in on $int_if proto tcp to port ssh set prio 1



Re: pf queue priq and set prio

2013-06-15 Thread Stuart Henderson
I have re-ordered the email to make it easier to reply..

On 2013-06-15, Михаил Швецов  wrote:
> Please help me
>
> I have 2 pf.conf files on the server. To example i exec
>
> ifconfig $int_if(em0) media 10baseT
>
> 1)
> set skip on lo
> altq on $int_if priq bandwidth 512Kb queue { qlan_ssh, qlan_def }
> queue qlan_ssh priority 1
> queue qlan_def priority 5 priq (default)
> block
> pass out
> pass in on $int_if
> match in on $int_if proto tcp to port ssh queue qlan_ssh

> When i run 1'st rules:
>
> it's working
>
> I login to this server by ssh and run sysctl and list statistics (vmstat,
> iostat on systat) and see all good.
>
> Then i copy by ftp(may samba) file 1Gig(example) it's copy by 60KB/sec, and i
> CAN'T SEE STATISTICS by ssh. Ssh session freeze.
>
> I may see that "queue qlan_ssh" - worked.


This is expected. High numbers take priority. All "priority 5"
traffic is sent before any "priority 1".

If you have so much high-priority traffic that it totally fills
the link, low-priority traffic will never be sent.


> 2)
> set skip on lo
> block
> pass out
> pass in on $int_if
> match in on $int_if set prio 3 # May not wrote this rule, 3 - by default
> match in on $int_if proto tcp to port ssh set prio 1
>
> When i run 2'nd rules:
>
> it's not working
>
> I login to this server by ssh and run sysctl and list statistics (vmstat,
> iostat on systat) and see all good.
>
> Then i copy by ftp(may samba) file 1Gig(example) it's copy by 60KB/sec, and i
> SEE STATISTICS by ssh but with some DELAY. Ssh session with some delay.
>
> I may see that "set prio 1" - NOT worked.
>
> What am i doing wrong?
>
>

For now, "set prio" queues are only limited by NIC speed. If you only
have 512Kb bandwidth as in the first example, this won't do anything
useful yet.

Henning is working on bandwidth controls for this, but it is not
committed. http://undeadly.org/cgi?action=article&sid=20130606101818
has more information.



pf queue priq and set prio

2013-06-15 Thread Михаил Швецов
Please help me

I have 2 pf.conf files on the server. To example i exec

ifconfig $int_if(em0) media 10baseT

1)

set skip on lo

altq on $int_if priq bandwidth 512Kb queue { qlan_ssh, qlan_def }

queue qlan_ssh priority 1

queue qlan_def priority 5 priq (default)

block

pass out

pass in on $int_if

match in on $int_if proto tcp to port ssh queue qlan_ssh

2)

set skip on lo

block

pass out

pass in on $int_if

match in on $int_if set prio 3 # May not wrote this rule, 3 - by default

match in on $int_if proto tcp to port ssh set prio 1

When i run 1'st rules:

it's working

I login to this server by ssh and run sysctl and list statistics (vmstat,
iostat
on systat) and see all good.

Then i copy by ftp(may samba) file 1Gig(example) it's copy by 60KB/sec, and i
CAN'T SEE STATISTICS by ssh. Ssh session freeze.

I may see that "queue qlan_ssh" - worked.

When i run 2'nd rules:

it's not working

I login to this server by ssh and run sysctl and list statistics (vmstat,
iostat
on systat) and see all good.

Then i copy by ftp(may samba) file 1Gig(example) it's copy by 60KB/sec, and i
SEE
STATISTICS by ssh but with some DELAY. Ssh session with some delay.

I may see that "set prio 1" - NOT worked.

What am i doing wrong?



Re: Traffic through default pf queue

2011-10-17 Thread Henning Brauer
pftop's functionality is almost completely in systat these days.

* Michel Blais  [2011-10-17 19:36]:
> I think it's could be possible with pftop.
> 
> "Pftop is a small, curses-based utility for real-time display of active 
> states and rule statistics for pf, the packet filter 
> . for OpenBSD 
> .
> Current release pftop-0.7, written and maintained by Can E. Acar."
> source : http://www.eee.metu.edu.tr/~canacar/pftop/
> 
> Le 2011-10-17 08:40, Claudiu Pruna a C)crit :
> > Hi everyone,
> >
> > I have a question, could anyone give me an ideea how can I "see" (like
> > tcpdump or something) the traffic that is passing throught the default
> > queue of pf ?
> >
> > Thanks for your ideeas.
> >
> >
> >
> > 
> 
> 
> -- 
> Michel Blais
> Administrateur rC)seau / Network administrator
> Targo Communications
> www.targo.ca
> 514-448-0773
> 

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS Services. Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: Traffic through default pf queue

2011-10-17 Thread Peter N. M. Hansteen
Claudiu Pruna  writes:

>   I have a question, could anyone give me an ideea how can I "see" (like
> tcpdump or something) the traffic that is passing throught the default
> queue of pf ?

On OpenBSD, systat has a number of PF-related views worth exploring.  

For an overview of traffic by quees 'systat queues' may be what you're
looking for.  The other non-intrusive way to check (ie without editing
in tagging etc) would be 'pfctl -vvsr' -- if traffic matches rules that
do queue assignment, you'll see the counters.

-- 
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
"Remember to set the evil bit on all malicious network traffic"
delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.



Re: Traffic through default pf queue

2011-10-17 Thread Michel Blais
I think it's could be possible with pftop.

"Pftop is a small, curses-based utility for real-time display of active 
states and rule statistics for pf, the packet filter 
. for OpenBSD 
.
Current release pftop-0.7, written and maintained by Can E. Acar."
source : http://www.eee.metu.edu.tr/~canacar/pftop/

Le 2011-10-17 08:40, Claudiu Pruna a C)crit :
>   Hi everyone,
>
>   I have a question, could anyone give me an ideea how can I "see" (like
> tcpdump or something) the traffic that is passing throught the default
> queue of pf ?
>
>   Thanks for your ideeas.
>
>
>
>   


-- 
Michel Blais
Administrateur rC)seau / Network administrator
Targo Communications
www.targo.ca
514-448-0773



Re: Traffic through default pf queue

2011-10-17 Thread Maxim Bourmistrov

Use "pass log" and "tag TAGGED" in pf rules,
then tcpdump -i pflog0

On 10/17/2011 02:40 PM, Claudiu Pruna wrote:

Hi everyone,

I have a question, could anyone give me an ideea how can I "see" (like
tcpdump or something) the traffic that is passing throught the default
queue of pf ?

Thanks for your ideeas.




Traffic through default pf queue

2011-10-17 Thread Claudiu Pruna
Hi everyone,

I have a question, could anyone give me an ideea how can I "see" (like
tcpdump or something) the traffic that is passing throught the default
queue of pf ?

Thanks for your ideeas.




-- 
Claudiu Pruna 



PF queue speed bug??

2010-09-04 Thread RLW

Hello,

I am using OpenBSD for a long time now, but recenty when I was testing
hi speed queues using altq and cbq i saw there is strange problem.

When queue is set to:
a)  5  mbit, trasfer rate between 2 computers is around 5mbit -> OK
b) 90 mbit, trasfer rate between 2 computers is around 90mbit -> OK
c) 50 mbit, trasfer rate between 2 computers is ONLY around 30mbit ->
WHY ?!?!?!?!???

I have tried changing mbit to kb, using % - no difference.
I have tested it on OpenBSD 4.7 and 4.2.

I was testing this speed using iperf, pktstat, transfering file using 
scp and wget.


Below is pf.conf:

set skip on lo

altq on xl1 cbq bandwidth 52Mb  queue { komp1_out2, komp2_out2, 
komp3_out2, domyslna_out2 }


queue komp1_out2bandwidth  500Kb   cbq
queue komp2_out2bandwidth  500Kb   cbq
queue komp3_out2bandwidth  50Mbcbq
queue domyslna_out2 bandwidth  1Mb cbq(default)


altq on xl0 cbq bandwidth 52Mb  queue { komp1_in2_wew, komp2_in2_wew, 
komp3_in2_wew, domyslna_in2_wew }


queue komp1_in2_wewbandwidth  500Kbcbq
queue komp2_in2_wewbandwidth  500Kbcbq
queue komp3_in2_wewbandwidth  50Mb cbq
queue domyslna_in2_wew bandwidth  1Mb  cbq (default)



pass  in quick on xl0 from 192.168.111.0/24 to any queue komp3_in2_wew
pass out quick on xl1 from 192.168.111.0/24 to any nat-to 10.0.0.4 queue 
komp3_out2



pass in quick  on xl1 proto tcp from any to 10.0.0.4 port {5001} rdr-to 
192.168.111.2 queue komp3_out2
pass out quick on xl0 proto tcp from any to 192.168.111.2 queue 
komp3_in2_wew



--
best regards,
RLW



PF: Queue parsing problem ?

2009-07-13 Thread Fernando Braga
Hello,

I'm setting up some queue discipline on one firewall, and I'm facing a
strange problem: the rules aren't assigning the packets to the correct
queue. As you can see below, they are going to inexistent qids, and
are ending in default queues.

I use this setup with assymetrical links, and it has been OK since
OpenBSD 3.9. As I'm setting up queues in OpenBSD 4.5 for the first
time, I found out it was working as it used to.

fmbraga:14$ sudo pfctl -g -sq
queue root_rl0 on rl0 bandwidth 100Mb priority 0 {speedy-up}
  [ qid=1 ifname=rl0 ifbandwidth=100Mb ]
queue  speedy-up on rl0 bandwidth 300Kb hfsc( red ecn realtime 300Kb
upperlimit 300Kb ) {Q-pri, Q-icmp, Q-voip, Q-biz, Q-ts, Q-http,
Q-mail, Q-def}
  [ qid=16 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-pri on rl0 bandwidth 54Kb priority 7 hfsc( realtime 13Kb )
  [ qid=5 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-icmp on rl0 bandwidth 13Kb priority 7 hfsc( realtime 13Kb )
  [ qid=6 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-voip on rl0 bandwidth 54Kb priority 6 hfsc( realtime 54Kb )
  [ qid=7 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-biz on rl0 bandwidth 54Kb priority 6 hfsc( realtime 13Kb )
  [ qid=8 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-ts on rl0 bandwidth 27Kb priority 5 hfsc( realtime 13Kb )
  [ qid=9 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-http on rl0 bandwidth 54Kb priority 4 hfsc( realtime 13Kb )
  [ qid=10 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-mail on rl0 bandwidth 27Kb priority 4 hfsc( realtime 13Kb )
  [ qid=11 ifname=rl0 ifbandwidth=100Mb ]
queue   Q-def on rl0 bandwidth 13Kb priority 0 hfsc( default )
  [ qid=12 ifname=rl0 ifbandwidth=100Mb ]
queue root_sis0 on sis0 bandwidth 100Mb priority 0 {local, speedy-dn}
  [ qid=2 ifname=sis0 ifbandwidth=100Mb ]
queue  local on sis0 bandwidth 90Mb
  [ qid=3 ifname=sis0 ifbandwidth=100Mb ]
queue  speedy-dn on sis0 bandwidth 1.20Mb hfsc( red ecn realtime
1.20Mb upperlimit 1.20Mb ) {Q-pri, Q-icmp, Q-voip, Q-biz, Q-ts,
Q-http, Q-mail, Q-def}
  [ qid=15 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-pri on sis0 bandwidth 54Kb priority 7 hfsc( realtime 13Kb )
  [ qid=5 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-icmp on sis0 bandwidth 13Kb priority 7 hfsc( realtime 13Kb )
  [ qid=6 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-voip on sis0 bandwidth 54Kb priority 6 hfsc( realtime 54Kb )
  [ qid=7 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-biz on sis0 bandwidth 54Kb priority 6 hfsc( realtime 13Kb )
  [ qid=8 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-ts on sis0 bandwidth 27Kb priority 5 hfsc( realtime 13Kb )
  [ qid=9 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-http on sis0 bandwidth 54Kb priority 4 hfsc( realtime 13Kb )
  [ qid=10 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-mail on sis0 bandwidth 27Kb priority 4 hfsc( realtime 13Kb )
  [ qid=11 ifname=sis0 ifbandwidth=100Mb ]
queue   Q-def on sis0 bandwidth 13Kb priority 0 hfsc( default )
  [ qid=12 ifname=sis0 ifbandwidth=100Mb ]
fmbraga:15$

fmbraga:16$ sudo pfctl -g -sr
@0 scrub in all fragment reassemble
  [ Skip steps: i=end d=end f=end p=end sa=end sp=end da=end dp=end ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@0 block drop log all
  [ Skip steps: p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@1 block drop in quick on ! lo inet from 127.0.0.0/8 to any
  [ Skip steps: d=8 f=8 p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@2 block drop in quick on ! sis0 inet from 172.16.8.0/24 to any
  [ Skip steps: i=7 d=8 f=8 p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@3 block drop in quick on ! sis0 inet from 172.16.6.0/24 to any
  [ Skip steps: i=7 d=8 f=8 p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@4 block drop in quick on ! sis0 inet from 172.16.12.0/24 to any
  [ Skip steps: i=7 d=8 f=8 p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@5 block drop in quick on ! sis0 inet from 172.16.14.0/24 to any
  [ Skip steps: i=7 d=8 f=8 p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@6 block drop in quick on ! sis0 inet from 172.16.15.0/24 to any
  [ Skip steps: d=8 f=8 p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]
@7 block drop in quick inet from <__automatic_b3f9a813_0:6> to any
  [ Skip steps: i=11 p=9 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]

@8 pass out all flags S/SA keep state
  [ Skip steps: i=11 d=12 f=11 sa=11 sp=end da=11 dp=10 ]
  [ queue: qname= qid=0 pqname= pqid=0 ]

@9 pass out proto tcp all user = 515 flags S/SA keep state queue(q-http, q-pri)
  [ Skip steps: i=11 d=12 f=11 p=11 sa=11 sp=end da=11 ]
  [ queue: qname=q-http qid=4 pqname=q-pri pqid=14 ]
@10 pass out proto tcp from any to any port = www flags S/SA keep
state queue(q-http, q-pri)
  [ Skip steps: d=12 sp=end ]
  [ queue: qname=q-http qid=4 pqname=q-pri pqid=14 ]
@11 pass out on sis0 inet from 172.16.0.0/16 to 172.16.0.0/16 no state
queue q-local
  [ Skip steps: sp=end ]
  [ queue: qname=q-local qid=17 pqname= pqid=17 ]
@12 pass in on rl0 proto tcp fro

Re: PF Queue on a GROUP of nics?

2008-10-15 Thread Brian A. Seklecki
On Mon, 2008-10-06 at 16:39 +1100, Sunnz wrote:
> Is it possible?
> 
> Say I have a few nics of the same group... dc0 dc1 dc2 dc3... which
> all belong to a group "dc".

Sunnz

Do you mean a "shared queue" where "downstream" bandwidth from a single
"upstream" interface is proportionally divided into two "downstream"
subnets as it egresses two separate interfaces?

I was just revisiting that from2006?

Ping me back if so.

~BAS




IMPORTANT: This message contains confidential information and is intended only 
for the individual named. If the reader of this message is not an intended 
recipient (or the individual responsible for the delivery of this message to an 
intended recipient), please be advised that any re-use, dissemination, 
distribution or copying of this message is prohibited. Please notify the sender 
immediately by e-mail if you have received this e-mail by mistake and delete 
this e-mail from your system.



Re: PF Queue on a GROUP of nics?

2008-10-06 Thread Henning Brauer
* Sunnz <[EMAIL PROTECTED]> [2008-10-06 16:59]:
> Ahhh ok... so what do I need to do this, 

write lots of code :)

> group, bridge, or something else?

bridge doesn't have queues either. that is just not how it works. one
still had to play the delay/drop games on the physical interfaces
(that have queues) but use a summary of all interfaces in the group to
do the math, or the like. lots of work that nobody has done. And I
don't see it getting done either; I for one am totally not interested
in that functionality nor writing the code.

-- 
Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
BS Web Services, http://bsws.de
Full-Service ISP - Secure Hosting, Mail and DNS Services
Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam



Re: PF Queue on a GROUP of nics?

2008-10-06 Thread Sunnz
Ahhh ok... so what do I need to do this, group, bridge, or something else?

2008/10/7 Henning Brauer <[EMAIL PROTECTED]>:
> * Sunnz <[EMAIL PROTECTED]> [2008-10-06 07:44]:
>> Is it possible?
>
> no. groups don't have any queues to play queue tricks on.
>
> --
> Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
> BS Web Services, http://bsws.de
> Full-Service ISP - Secure Hosting, Mail and DNS Services
> Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam
>
>



-- 
This e-mail may be confidential. You may not copy, forward or use any
part. All disclaimers on the Internet are of zero legal effectiveness.
http://www.goldmark.org/jeff/stupid-disclaimers/



Re: PF Queue on a GROUP of nics?

2008-10-06 Thread Henning Brauer
* Sunnz <[EMAIL PROTECTED]> [2008-10-06 07:44]:
> Is it possible?

no. groups don't have any queues to play queue tricks on.

-- 
Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
BS Web Services, http://bsws.de
Full-Service ISP - Secure Hosting, Mail and DNS Services
Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam



Re: PF Queue on a GROUP of nics?

2008-10-06 Thread Sunnz
2008/10/6 Girish Venkatachalam <[EMAIL PROTECTED]>:
> No need to add a bridge.
>
> You are looking for ifconfig(8). Look for interface groups and you are
> done.
>
> -Girish
>
>

Oh, so just apply altq rules to the appropieate group and it will work?

That sounds great!! Thanks!!

-- 
This e-mail may be confidential. You may not copy, forward or use any
part. All disclaimers on the Internet are of zero legal effectiveness.
http://www.goldmark.org/jeff/stupid-disclaimers/



Re: PF Queue on a GROUP of nics?

2008-10-06 Thread Girish Venkatachalam
On 16:39:30 Oct 06, Sunnz wrote:
> Is it possible?
> 
> Say I have a few nics of the same group... dc0 dc1 dc2 dc3... which
> all belong to a group "dc".
> 
> And say if I wanted to limit the overall bandwidth for the group... so
> say at any point in time the overall outgoing bandwidth of the group
> dc will not be over 100mbp.
> 
> Would it work if I just apply altq to dc in pf?
> 
> Or do I need to bridge it... this is where I have no ideas... but say
> I add a bridge0 that contains dc0 dc1 dc3 dc2, and apply altq to
> bridge0 in pf.

No need to add a bridge.

You are looking for ifconfig(8). Look for interface groups and you are
done.

-Girish



PF Queue on a GROUP of nics?

2008-10-05 Thread Sunnz
Is it possible?

Say I have a few nics of the same group... dc0 dc1 dc2 dc3... which
all belong to a group "dc".

And say if I wanted to limit the overall bandwidth for the group... so
say at any point in time the overall outgoing bandwidth of the group
dc will not be over 100mbp.

Would it work if I just apply altq to dc in pf?

Or do I need to bridge it... this is where I have no ideas... but say
I add a bridge0 that contains dc0 dc1 dc3 dc2, and apply altq to
bridge0 in pf.

Regards,
Sunnz.

-- 
This e-mail may be confidential. You may not copy, forward or use any
part. All disclaimers on the Internet are of zero legal effectiveness.
http://www.goldmark.org/jeff/stupid-disclaimers/



Re: pf - queue filter directive sticky?

2008-10-01 Thread Henning Brauer
* (private) HKS <[EMAIL PROTECTED]> [2008-09-30 22:34]:
> Thanks, I overlooked that a default queue was required. With that in
> mind, then, does this section of pf.conf(5) imply that the queue
> directive is sticky?

pf.conf doesn't say it would be sticky anywhere, and, surprise, it
isn't.

-- 
Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
BS Web Services, http://bsws.de
Full-Service ISP - Secure Hosting, Mail and DNS Services
Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam



Re: pf - queue filter directive sticky?

2008-09-30 Thread (private) HKS
> from pf.conf man page:
>
> default Packets not matched by another queue are assigned to this
> one.  Exactly one default queue is *required.*


Thanks, I overlooked that a default queue was required. With that in
mind, then, does this section of pf.conf(5) imply that the queue
directive is sticky?
"During the filtering component of pf.conf, the last referenced queue
name is where any packets from pass rules will be queued..."


> Why you just not use "quick" in the first rule?
>
> pass in quick on $int_if from 10.0.0.1 queue tens
>
> pass in on $int_if

This question is for clarity's sake: is the "quick" required?

-HKS



Re: pf - queue filter directive sticky?

2008-09-30 Thread Giancarlo Razzolini
(private) HKS escreveu:
>>> imho normally this packet wouldn't be queued because the last count
>>> matches the packet so the last rule applies:
>>>   
>
> This is what I assumed at first, but the stickiness of tags and the
> (seeming) logic of doing the same with queues made me second-guess
> myself.
>
>
>   
>> on the other hand:
>>
>> "During the filtering component of pf.conf, the last referenced
>> queue name is where any packets from pass rules will be queued..."
>>
>> that means because of the sequential order that the packet should be
>> queued imho.
>> 
>
> Is that the case, or does that mean that packets passed by a statement
> on an altq-enabled interface without an explicit "queue "
> directive are automatically assigned to the last defined queue?
>
> My initial tests suggest that the queue statements are not sticky (ie,
> my initial rules would not have queued it in the "tens" queue), but
> I'm still not sure.
>
> -HKS
>
>
>   
from pf.conf man page:

default Packets not matched by another queue are assigned to this
 one.  Exactly one default queue is *required.*


-- 
Giancarlo Razzolini
http://lock.razzolini.adm.br
Linux User 172199
Red Hat Certified Engineer no:804006389722501
Verify:https://www.redhat.com/certification/rhce/current/
Moleque Sem Conteudo Numero #002
OpenBSD Stable
Ubuntu 8.04 Hardy Heron
4386 2A6F FFD4 4D5F 5842  6EA0 7ABE BBAB 9C0E 6B85



Re: pf - queue filter directive sticky?

2008-09-30 Thread Rosen Iliev

Why you just not use "quick" in the first rule?

pass in quick on $int_if from 10.0.0.1 queue tens
pass in on $int_if

Rosen

(private) HKS wrote, On 9/29/2008 1:29 PM:

If the following two rules apply to a given packet in the order shown,
will the packet be queued?

pass in on $int_if from 10.0.0.1 queue tens
pass in on $int_if

I've not been able to find a clear answer in pf.conf(5) or the online
PF documentation. If I overlooked it, please let me know. Thanks in
advance for the help.

-HKS




Re: pf - queue filter directive sticky?

2008-09-30 Thread (private) HKS
>> imho normally this packet wouldn't be queued because the last count
>> matches the packet so the last rule applies:

This is what I assumed at first, but the stickiness of tags and the
(seeming) logic of doing the same with queues made me second-guess
myself.


> on the other hand:
>
> "During the filtering component of pf.conf, the last referenced
> queue name is where any packets from pass rules will be queued..."
>
> that means because of the sequential order that the packet should be
> queued imho.

Is that the case, or does that mean that packets passed by a statement
on an altq-enabled interface without an explicit "queue "
directive are automatically assigned to the last defined queue?

My initial tests suggest that the queue statements are not sticky (ie,
my initial rules would not have queued it in the "tens" queue), but
I'm still not sure.

-HKS



Re: pf - queue filter directive sticky?

2008-09-30 Thread Uwe Werler
Am Tue, 30 Sep 2008 10:53:05 +0200
schrieb [EMAIL PROTECTED]:

> Am Mon, 29 Sep 2008 15:29:08 -0400
> schrieb "(private) HKS" <[EMAIL PROTECTED]>:
> 
> > If the following two rules apply to a given packet in the order
> > shown, will the packet be queued?
> >
> > pass in on $int_if from 10.0.0.1 queue tens
> > pass in on $int_if
> >
> > I've not been able to find a clear answer in pf.conf(5) or the
> > online PF documentation. If I overlooked it, please let me know.
> > Thanks in advance for the help.
> >
> > -HKS
> 
> imho normally this packet wouldn't be queued because the last count
> matches the packet so the last rule applies:
> 
> from man pf.conf:
> 
> "For each packet processed by the packet filter, the filter rules
> are evaluated in sequential order, from first to last.  The last
> matching rule decides what action is taken.  If no rule matches the
> packet, the default action is to pass the packet."
> 
> uw
> 
> [demime 1.01d removed an attachment of type application/pgp-signature
> which had a name of signature.asc]
> 

on the other hand: 

"During the filtering component of pf.conf, the last referenced
queue name is where any packets from pass rules will be queued..."

that means because of the sequential order that the packet should be
queued imho.



Re: pf - queue filter directive sticky?

2008-09-30 Thread uw
Am Mon, 29 Sep 2008 15:29:08 -0400
schrieb "(private) HKS" <[EMAIL PROTECTED]>:

> If the following two rules apply to a given packet in the order shown,
> will the packet be queued?
>
> pass in on $int_if from 10.0.0.1 queue tens
> pass in on $int_if
>
> I've not been able to find a clear answer in pf.conf(5) or the online
> PF documentation. If I overlooked it, please let me know. Thanks in
> advance for the help.
>
> -HKS

imho normally this packet wouldn't be queued because the last count
matches the packet so the last rule applies:

from man pf.conf:

"For each packet processed by the packet filter, the filter rules
are evaluated in sequential order, from first to last.  The last
matching rule decides what action is taken.  If no rule matches the
packet, the default action is to pass the packet."

uw

[demime 1.01d removed an attachment of type application/pgp-signature which had 
a name of signature.asc]



pf - queue filter directive sticky?

2008-09-29 Thread (private) HKS
If the following two rules apply to a given packet in the order shown,
will the packet be queued?

pass in on $int_if from 10.0.0.1 queue tens
pass in on $int_if

I've not been able to find a clear answer in pf.conf(5) or the online
PF documentation. If I overlooked it, please let me know. Thanks in
advance for the help.

-HKS



pf+queue+pass in+statfeful out

2008-02-27 Thread S. Scott Sima, CISA, CISM
I know queuing only applies to outbound traffic. I'm using "ssh -w"
tunnelling to the pf+gateway.  I, therefore, have

pass in on #ext_if inet proto tcp ... keep state queue (QSHH, QLOWLAT), 

which, if I understand correctly, should assign the stateful
reply/return (outbound) traffic be queued on QSHH and QLOWLAT
accordingly.

It doesn't do so.

1. With the queue(QSHH,QLOWLAT) arguments in place, there is NO
returning traffic flow.  Return traffic seems to vanish.  pflog0 is
silent on any blocking.

2. The QSSH stats (pfctl -vvsq) counters are zero and remain at zero.

If I use the identical rule sets but omit the "queue(QSHH,QLOWLAT)"
options, reply traffic flows correctly, except no queuing.

The queues are working for everything else (default, voip, lowlat, etc).

The /etc/pf.conf fragment follows

--snip--
# -v-
pass in log quick on $ext_if inet proto tcp \
 from ! to ($ext_if:0) \
 flags S/SA keep state \
 (max-src-conn-rate 3/120, overload  flush global) \
 queue(QSHH,QLOWLAT) label SSHVPNGRP
#
pass in log quick on tun0 inet \
 from (tun0:peer) to any \
 tag VTUN keep state label SSHVPNGRP
#
pass out log quick on $int_if inet \
 tagged VTUN keep state label SSHVPNGRP
# -^-
--end-snip-- 

It's as if there needs to be a pass out, but ??? because "state" is
handling that.

Thanks,



mclpool limit reached - pf queue dropping

2008-01-16 Thread G 0kita
Hello everyone.  I'm seeing the mclpool limit reached error.
I'm intending on replacing a transparent firewall running OpenBSD3.6 with
one running 4.2, and in the testing phase I've noticed an interesting
problem.
The intention is to have traffic coming in on interface A (trunk0 - bge0 and
bge1 loadbalanced) bridging onto interface B (em0) and interface C (em2).
At the moment I have interface B connected and traffic is getting to the new
firewall through the old firewall.  Of course the new one is seeing traffic
coming in on an interface it doesn't expect, but that's not the issue I want
to discuss.
With only interface B connected I end up eventually getting the mclpool
limit reached error.
Looks like as the queue gets filled up (it's got nowhere to go, of course)
when it drops packets it doesn't properly release the allocated memory.
Check out the vmstat 3 commands down.
I've already got the 004pf patch compiled in, and I just added the 005 patch
and rebuilt and there's the same behaviour.
I won't see this problem in production, but it could mean that if a link
goes down eventually the firewall will require a reboot before properly
functioning.
Any comments?

# uname -a
OpenBSD xx.xx.xx 4.2 GENERIC.MP#0 i386

# dmesg | tail -1
WARNING: mclpool limit reached; increase kern.maxclusters

# sysctl kern.maxclusters
kern.maxclusters=6144

# vmstat -m | grep -e Releases -e mclpl ; pfctl -vsq|grep -B2 50/ ; sleep 10
; vmstat -m | grep -e Releases -e mclpl ; pfctl -vsq|grep -B2 50/
NameSize Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg
Idle
mclpl   204848971 111592  38411  5285 0  5285  5285 4
61444
queue  dmz-low-priority on em2 bandwidth 5Mb cbq( borrow default )
  [ pkts:  0  bytes:  0  dropped pkts:   7055 bytes: 845459
]
  [ qlength:  50/ 50  borrows:  0  suspends:  0 ]
--
queue  svr-low-priority on em1 bandwidth 5Mb cbq( borrow default )
  [ pkts:  0  bytes:  0  dropped pkts:   7054 bytes: 845361
]
  [ qlength:  50/ 50  borrows:  0  suspends:  0 ]
NameSize Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg
Idle
mclpl   204849149 111592  38556  5304 0  5304  5304 4
61446
queue  dmz-low-priority on em2 bandwidth 5Mb cbq( borrow default )
  [ pkts:  0  bytes:  0  dropped pkts:   7083 bytes: 848704
]
  [ qlength:  50/ 50  borrows:  0  suspends:  0 ]
--
queue  svr-low-priority on em1 bandwidth 5Mb cbq( borrow default )
  [ pkts:  0  bytes:  0  dropped pkts:   7082 bytes: 848606
]
  [ qlength:  50/ 50  borrows:  0  suspends:  0 ]

# ifconfig|grep -e flags -e media -e trunkp
lo0: flags=8049 mtu 33208
em0: flags=8943 mtu 1500
media: Ethernet autoselect (100baseTX full-duplex,rxpause,txpause)
em1: flags=8943 mtu 1500
media: Ethernet autoselect (none)
em2: flags=8943 mtu 1500
media: Ethernet autoselect (none)
em3: flags=8843 mtu 1500
media: Ethernet autoselect (none)
bge0: flags=8943 mtu 1500
media: Ethernet autoselect (none)
bge1: flags=8943 mtu 1500
media: Ethernet autoselect (none)
enc0: flags=0<> mtu 1536
trunk0: flags=8943 mtu 1500
trunk: trunkproto loadbalance
trunkport bge1
trunkport bge0 master
media: Ethernet autoselect
bridge0: flags=41 mtu 1500
pflog0: flags=141 mtu 33208

# brconfig bridge0|grep -v 'flags=0'
bridge0: flags=41
priority 32768 hellotime 2 fwddelay 15 maxage 20 holdcnt 6 proto
rstp
designated: id 00:00:00:00:00:00 priority 0
trunk0 flags=3
port 9 ifpriority 0 ifcost 0
em0 flags=3
port 1 ifpriority 0 ifcost 0
em1 flags=3
port 2 ifpriority 0 ifcost 0
em2 flags=3
port 3 ifpriority 0 ifcost 0
em3 flags=100
Addresses (max cache: 100, timeout: 240):

# grep -e svr-low-priority -e dmz-low-priority pf-*
pf-dmz.conf:pass out on $dmz_if inet proto icmp from any to 
icmp-type 8 code 0 queue dmz-low-priority
pf-dmz.conf:pass out on $dmz_if inet proto icmp from any to 
icmp-type 11 code 0 queue dmz-low-priority
pf-dmz.conf:pass out on $dmz_if inet proto tcp from any to 
port { 80 443 } queue dmz-low-priority
pf-ece.conf:pass out on $svr_if inet proto icmp from any to 
icmp-type 8 code 0 queue svr-low-priority
pf-ece.conf:pass out on $svr_if inet proto icmp from any to 
icmp-type 11 code 0 queue svr-low-priority
pf-ece.conf:pass out on $svr_if inet proto udp from any to 
port 123 queue svr-low-priority
pf-ece.conf:pass out on $svr_if inet proto { tcp udp } from any to
 port 53 queue svr-low-priority
pf-eng.conf:pass out on $svr_if inet proto icmp from any to 
icmp-type 8 code 0 queue svr-low-priority
pf-eng.conf:pass out on $svr_if inet proto icmp from any to 
icmp-type 11 code 0 queue svr-low-priority
pf-nix.conf:pass out on $svr_if inet proto tcp from any to 
port 80 queue svr-low-priority
pf-nix.conf:pass out on $s

Re: pf queue skipping

2006-08-23 Thread Jason Dixon

On Aug 23, 2006, at 7:26 AM, Lawrence Horvath wrote:


Yes it says its only "useful" for outbound, that doesnt mean that it
shoudnt still try to queue inbound, which it does sorta do as per my
pfctl -vvs queue, however it skips on parent queue for some reason


Try reading the entire paragraph.

--
Jason Dixon
DixonGroup Consulting
http://www.dixongroup.net



Re: pf queue skipping

2006-08-23 Thread Lawrence Horvath

Yes it says its only "useful" for outbound, that doesnt mean that it
shoudnt still try to queue inbound, which it does sorta do as per my
pfctl -vvs queue, however it skips on parent queue for some reason

On 8/23/06, Jason Dixon <[EMAIL PROTECTED]> wrote:

On Aug 23, 2006, at 6:28 AM, Lawrence Horvath wrote:

> I have the following config for my pf.conf and i noticed that nothing
> shows in the queues for incomming:
>
> 
>
> at this time i was transfering files into the server and it was not
> showing in the incomming queues, not sure why, i know its hard to
> "limit" incomming traffic, but this doesnt even show the traffic to
> start with

http://www.openbsd.org/faq/pf/queueing.html

Read the 2nd paragraph under the first section.

--
Jason Dixon
DixonGroup Consulting
http://www.dixongroup.net







--
-Lawrence



Re: pf queue monitoring

2006-08-23 Thread tony sarendal
On 23/08/06, Julien TOUCHE <[EMAIL PROTECTED]> wrote:
>
> tony sarendal wrote on 22/08/2006 08:32:
> > I wrote a script to generate graphs for the queues using python and
> > rrdtool a while back when I needed it, although it only works with
> > CBQ. http://www.prefixmaster.com/eyeonpf.php
> >
>
> awesome tool. i try it yesterday evening and it is really simple to make
> it work.
>
> two questions:
> - is it possible or plan to make it work on a remote system: maybe
> generate data on a host and graph only on other ?
>

Ok, having  a boring day at work.
I updated the script so the shell commands are defined in the beginning.
Now you just need to change those four commands so they execute
on the remote host using ssh (or whatever) and you are up and running.

Back to work...

/Tony

-- 
Tony Sarendal - [EMAIL PROTECTED]
IP/Unix
   -= The scorpion replied,
   "I couldn't help it, it's my nature" =-



Re: pf queue skipping

2006-08-23 Thread Jason Dixon

On Aug 23, 2006, at 6:28 AM, Lawrence Horvath wrote:


I have the following config for my pf.conf and i noticed that nothing
shows in the queues for incomming:



at this time i was transfering files into the server and it was not
showing in the incomming queues, not sure why, i know its hard to
"limit" incomming traffic, but this doesnt even show the traffic to
start with


http://www.openbsd.org/faq/pf/queueing.html

Read the 2nd paragraph under the first section.

--
Jason Dixon
DixonGroup Consulting
http://www.dixongroup.net



pf queue skipping

2006-08-23 Thread Lawrence Horvath

I have the following config for my pf.conf and i noticed that nothing
shows in the queues for incomming:

##BEGIN_QUEUES##
altq on tl0 cbq bandwidth 3000Kb qlimit 200 queue { traffic_out, traffic_in }

queue traffic_out bandwidth 1500Kb qlimit 200 cbq { \
other_out, ssh_out, ftp_data_out, ftp_control_out, http_out }

queue traffic_in  bandwidth 1500Kb qlimit 200 cbq { \
other_in,  ssh_in,  ftp_data_in,  ftp_control_in,  http_in  }

queue other_out bandwidth 100Kb qlimit 200 cbq (default, borrow)
  queue ssh_out bandwidth 100Kb qlimit 200 cbq (borrow)
  queue http_out bandwidth 200Kb qlimit 200 cbq (borrow)
  queue ftp_control_out bandwidth 100Kb qlimit 200 cbq (borrow)
queue ftp_data_out bandwidth 1000Kb qlimit 200 cbq

queue other_in  bandwidth 100Kb qlimit 200 cbq ( borrow )
  queue ssh_in  bandwidth 100Kb qlimit 200 cbq (borrow)
  queue http_in  bandwidth 200Kb qlimit 200 cbq (borrow)
  queue ftp_control_in  bandwidth 100Kb qlimit 200 cbq (borrow)
  queue ftp_data_in  bandwidth 1000Kb qlimit 200 cbq
##END_QUEUES##

##BEGIN_PACKETFILTERS##
block in on tl0 from any to any
pass in on tl0 proto tcp from any to any port 22 queue ssh_in
pass in on tl0 proto tcp from any to any port 20 queue ftp_data_in
pass in on tl0 proto tcp from any to any port 21 queue ftp_control_in
pass in on tl0 proto tcp from any to any port 80 queue http_in
pass in on tl0 proto udp from any to any port 53
pass in on tl0 proto icmp from any to any queue other_in

pass out on tl0 from any to any queue other_out keep state
pass out on tl0 proto tcp from any port 22 to any queue ssh_out
pass out on tl0 proto tcp from any port 20 to any queue ftp_data_out keep state
pass out on tl0 proto tcp from any port 21 to any queue ftp_control_out
pass out on tl0 proto tcp from any port 80 to any queue http_out
block out on tl0 proto icmp from any to any
##END_PACKETFILTERS##





queue root_tl0 bandwidth 3Mb priority 0 qlimit 200 cbq( wrr root )
{traffic_out, traffic_in}
 [ pkts:  44766  bytes:2785500  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured:   410.6 packets/s, 198.50Kb/s ]
queue  traffic_out bandwidth 1.50Mb qlimit 200 {other_out, ssh_out,
http_out, ftp_control_out, ftp_data_out}
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   other_out bandwidth 100Kb qlimit 200 cbq( borrow default )
 [ pkts:  3  bytes:374  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 4.14 b/s ]
queue   ssh_out bandwidth 100Kb qlimit 200 cbq( borrow )
 [ pkts:  44763  bytes:2785126  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  44016  suspends:  0 ]
 [ measured:   410.6 packets/s, 198.50Kb/s ]
queue   http_out bandwidth 200Kb qlimit 200 cbq( borrow )
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   ftp_control_out bandwidth 100Kb qlimit 200 cbq( borrow )
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   ftp_data_out bandwidth 1Mb qlimit 200
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue  traffic_in bandwidth 1.50Mb qlimit 200 {other_in, ssh_in,
http_in, ftp_control_in, ftp_data_in}
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   other_in bandwidth 100Kb qlimit 200 cbq( borrow )
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   ssh_in bandwidth 100Kb qlimit 200 cbq( borrow )
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   http_in bandwidth 200Kb qlimit 200 cbq( borrow )
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   ftp_control_in bandwidth 100Kb qlimit 200 cbq( borrow )
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
 [ qlength:   0/200  borrows:  0  suspends:  0 ]
 [ measured: 0.0 packets/s, 0 b/s ]
queue   ftp_data_in bandwidth 1Mb qlimit 200
 [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  

Re: pf queue monitoring

2006-08-23 Thread tony sarendal
On 23/08/06, tony sarendal <[EMAIL PROTECTED]> wrote:
>
>
>
> On 23/08/06, Julien TOUCHE <[EMAIL PROTECTED]> wrote:
> >
> > tony sarendal wrote on 22/08/2006 08:32:
> > > I wrote a script to generate graphs for the queues using python and
> > > rrdtool a while back when I needed it, although it only works with
> > > CBQ. http://www.prefixmaster.com/eyeonpf.php
> > >
> >
> > awesome tool. i try it yesterday evening and it is really simple to make
> > it work.
> >
> > two questions:
> > - is it possible or plan to make it work on a remote system: maybe
> > generate data on a host and graph only on other ?
>
>
> How about rsyncing the web directory to the remote server ?
>
> - is there a way to debug label graph. some don't work for me (no graph;
> > labels are correctly listed)
>
> some other errors, are for label with [<>] characters (if
> > tftp_stuff:$dstaddr and dst_addr is a table or with ports like x<>w)
> > other has nothing special in label name like "string:port"
> >
> >
> can you send me the rules and a "pfctl -sl" ?
> I implemented that stuff really quickly and only tried it on a handfull
> of firewalls. The label stats are nice but has one drawback, since a pfctl
> -f
> clears the label stats and after that doesn't update existing sessions it
> works poorly for rules with few long lasting sessions.
>
> /Tpny
>

I should not type and eat.

"/Tpny"


-- 
Tony Sarendal - [EMAIL PROTECTED]
IP/Unix
   -= The scorpion replied,
   "I couldn't help it, it's my nature" =-



Re: pf queue monitoring

2006-08-22 Thread tony sarendal
On 23/08/06, Julien TOUCHE <[EMAIL PROTECTED]> wrote:
>
> tony sarendal wrote on 22/08/2006 08:32:
> > I wrote a script to generate graphs for the queues using python and
> > rrdtool a while back when I needed it, although it only works with
> > CBQ. http://www.prefixmaster.com/eyeonpf.php
> >
>
> awesome tool. i try it yesterday evening and it is really simple to make
> it work.
>
> two questions:
> - is it possible or plan to make it work on a remote system: maybe
> generate data on a host and graph only on other ?


How about rsyncing the web directory to the remote server ?

- is there a way to debug label graph. some don't work for me (no graph;
> labels are correctly listed)

some other errors, are for label with [<>] characters (if
> tftp_stuff:$dstaddr and dst_addr is a table or with ports like x<>w)
> other has nothing special in label name like "string:port"
>
>
can you send me the rules and a "pfctl -sl" ?
I implemented that stuff really quickly and only tried it on a handfull
of firewalls. The label stats are nice but has one drawback, since a pfctl
-f
clears the label stats and after that doesn't update existing sessions it
works poorly for rules with few long lasting sessions.

/Tpny

-- 
Tony Sarendal - [EMAIL PROTECTED]
IP/Unix
   -= The scorpion replied,
   "I couldn't help it, it's my nature" =-



Re: pf queue monitoring

2006-08-22 Thread Julien TOUCHE
tony sarendal wrote on 22/08/2006 08:32:
> I wrote a script to generate graphs for the queues using python and 
> rrdtool a while back when I needed it, although it only works with 
> CBQ. http://www.prefixmaster.com/eyeonpf.php
> 

awesome tool. i try it yesterday evening and it is really simple to make
it work.

two questions:
- is it possible or plan to make it work on a remote system: maybe
generate data on a host and graph only on other ?
- is there a way to debug label graph. some don't work for me (no graph;
labels are correctly listed)
some other errors, are for label with [<>] characters (if
tftp_stuff:$dstaddr and dst_addr is a table or with ports like x<>w)
other has nothing special in label name like "string:port"

thanks a lot for this great script
Regards

Julien



Re: pf queue monitoring

2006-08-21 Thread tony sarendal
On 22/08/06, Michal Soltys <[EMAIL PROTECTED]> wrote:
>
> Lawrence Horvath wrote:
> > Is there a way to monitor how much traffic is passing through a queue in
> > bps?
>
>
I wrote a script to generate graphs for the queues using python and rrdtool
a while back when I needed it, although it only works with CBQ.
http://www.prefixmaster.com/eyeonpf.php

/Tony S

-- 
Tony Sarendal - [EMAIL PROTECTED]
IP/Unix
   -= The scorpion replied,
   "I couldn't help it, it's my nature" =-



Re: pf queue monitoring

2006-08-21 Thread Michal Soltys

Lawrence Horvath wrote:
Is there a way to monitor how much traffic is passing through a queue in 
bps?


Besides pfctl -vvsq, try pftop from ports - it's great pf monitor, similar 
in use to top.




Re: pf queue monitoring

2006-08-21 Thread Jeff Quast

On 8/21/06, Lawrence Horvath <[EMAIL PROTECTED]> wrote:

Is there a way to monitor how much traffic is passing through a queue in bps?
Im using 'pfctl -s queue -v' but it seems to only show a running total
of packets and bits that have passed through it, and i want to be able
to see it in bps anyone know of a way to do this?

# uname -a
OpenBSD localhost.localdomain 3.9 GENERIC.MP#598 i386

thanks

--
-Lawrence



did you try adding a second -v?

pfctl -vvs queue



pf queue monitoring

2006-08-21 Thread Lawrence Horvath

Is there a way to monitor how much traffic is passing through a queue in bps?
Im using 'pfctl -s queue -v' but it seems to only show a running total
of packets and bits that have passed through it, and i want to be able
to see it in bps anyone know of a way to do this?

# uname -a
OpenBSD localhost.localdomain 3.9 GENERIC.MP#598 i386

thanks

--
-Lawrence



Re: pf queue

2006-01-24 Thread Reyk Floeter
On Mon, Jan 23, 2006 at 10:00:17PM -0500, Axton wrote:
> Is there a capability with pf to send packets to userspace for
> handling/manipulation, whereby they can be returned back to the
> kernel, similar to the queue facilities available in iptables?
> 

no.

but for IP-based connections you could use transparent proxying, like
spamd(8) or ftp-proxy(8).

you can also use the bpf(4) interface and the BIOCGFILDROP option to
queue packets to userspace and to drop them in the receive path, which
is currently used by dhclient. there is now way to do packet-based
dropping or to return them back to kernel space at the moment.
nevertheless, i like the bpf approach because it also works for layer
2 traffic and for non-ethernet data link types (like 802.11).

reyk



pf queue

2006-01-23 Thread Axton
Is there a capability with pf to send packets to userspace for
handling/manipulation, whereby they can be returned back to the
kernel, similar to the queue facilities available in iptables?

Axton



Re: PF Queue problem

2005-11-18 Thread viq
On Friday 18 of November 2005 16:58, knitti wrote:
> I don't know valknut (or the DirectConnect protocol), but if it is
> similiar to other p2p filesharing protocols, then you do also uploads
> on your outbound connection, e.g. if you connect to another host, and
> this host happens to want something from you, you will upload it over
> your outbound connection.
> at least this case is not covered by the rules you posted.

Good point. But after a while ALL traffic goes into the q_def queue. I guess 
the only thing that could help is forcing valknut to use only certain ports 
for outgoing connections... Which i have no idea how to do right now :( 

Snippet from pfctl -vv -s queue :

queue q_pri priority 7 
  [ pkts:  63759  bytes:4304556  dropped pkts:  0 bytes:  0 ]
  [ qlength:   0/ 50 ]
  [ measured: 9.7 packets/s, 5.17Kb/s ]
queue q_def priority 3 qlimit 70 priq( red ) 
  [ pkts:  85507  bytes:  118509579  dropped pkts:786 bytes: 1052782 ]
  [ qlength:  11/ 70 ]
  [ measured:23.5 packets/s, 261.71Kb/s ]
queue q_p2p_high priority 2 qlimit 70 priq( red default ) 
  [ pkts:   7340  bytes: 411376  dropped pkts:222 bytes:  13644 ]
  [ qlength:  10/ 70 ]
  [ measured: 0.4 packets/s, 200 b/s ]
queue q_p2p_low qlimit 250 priq( red ) 
  [ pkts:   6618  bytes:8879271  dropped pkts:  0 bytes:  0 ]
  [ qlength:  13/250 ]
  [ measured: 0.5 packets/s, 5.19Kb/s ]

So some traffic IS assigned to that queue. Just it doesn't match any of the 
uploads...

-- 
viq

--
Okradaja nas na stacjach? >>> http://link.interia.pl/f18d4



Re: PF Queue problem

2005-11-18 Thread knitti
I don't know valknut (or the DirectConnect protocol), but if it is
similiar to other p2p filesharing protocols, then you do also uploads
on your outbound connection, e.g. if you connect to another host, and
this host happens to want something from you, you will upload it over
your outbound connection.
at least this case is not covered by the rules you posted.


--knitti



PF Queue problem

2005-11-18 Thread viq
Below is the snippet from my pf.conf with all relevant rules (i can paste the 
whole thing, just thought this will be easier to read).
The problem: when i start the program, the traffic from it gets properly 
queued as q_p2p_low. But after a while (10-20 minutes i think) it moves to 
q_def. Application is valknut, if that makes any difference, router is 
running OpenBSD-current (OpenBSD 3.8-current (GENERIC) #252: Tue Nov 15 
22:22:13 MST 2005). Any ideas what could be happening, and how to solve it?

Thanks in advance.

pf.conf snippet:

altq on $ext_if priq bandwidth 272Kb queue { q_pri, q_def, q_p2p_low, 
q_p2p_high }
queue q_pri priority 7
queue q_def priority 3 priq(red) qlimit 70
queue q_p2p_high priority 2 priq(default, red) qlimit 70
queue q_p2p_low priority 1 priq(red) qlimit 250

<..>

rdr on $ext_if proto { tcp, udp } from any to port $p2p_port \
-> $internal_ip port $p2p_port

pass out keep state queue (q_def, q_pri)

pass quick on { lo0 lo $int_if }
antispoof quick for { lo0 lo $int_if }
pass in on $ext_if proto { tcp, udp} to $internal_ip port $p2p_port keep state 
queue (q_p2p_low, q_p2p_high)

-- 
viq

--
Okradaja nas na stacjach? >>> http://link.interia.pl/f18d4