Re: altq priq Anomaly?

2005-06-23 Thread Stefan Zill

Jon Hart wrote:

On Thu, Jun 23, 2005 at 07:39:41AM -0400, Melameth, Daniel D. wrote:

The TCP ACKs are not the issue.  The issue is I never get more than
half of what I set the bandwidth value to.


I've never been able to get exactly the bandwidth I specified in my
pf.conf altq rules.


I almost always get what I specify in the altq definitions. Yet I once had 
such an issue: I ran a local proxy which connected from my (tun0) address to 
(tun0), and later this data left through tun0 on a different connection. The 
first connection, although not physically passing through the external 
interface, consumed altq bandwidth. Maybe you have a similar kind of issue, 
such that your traffic passes through altq twice.
I fixed my problem by simply using (lo0) to (lo0) connections instead of 
(tun0) to (tun0). Maybe this behaviour can be considered a bug, or maybe it 
can be avoided with better pf rules.
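If somebody wants to keep such purely local hops away from the shaper 
altogether, a sketch (assuming a pf version that already has "set skip", and 
a proxy that can bind to 127.0.0.1) would be:

```
# keep loopback traffic out of filtering and queueing entirely
set skip on lo0
```

With the proxy listening on 127.0.0.1 instead of the tun0 address, the first 
leg of the connection never touches the external interface's queues.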


HTH
Stefan 



Re: Traffic shaping Download and Upload

2004-10-18 Thread Stefan Zill
Miroslav Kubik wrote:
Hi
Hi
I have to set up traffic shaping for clients in the LAN. Every client
needs 256Kbit download speed and 128Kbit upload speed, but I don't know
how to do it. The clients use NAT for Internet access, so I can't limit
outgoing traffic on the external interface per local IP in the LAN.
Can you help me?
I hope so. :)
First of all, you have to establish the necessary queues (for example):
# create the external queue tree (fill in your real uplink speed)
altq on $ExtIF cbq bandwidth <uplink speed> queue \
    { client1_ext, client2_ext, client3_ext, ext }

# a default queue using the remaining bandwidth
queue ext bandwidth 90% cbq(red borrow default) priority 1

# one subqueue tree per client, separating bulk data from
# empty ACKs and other low-delay packets
queue client1_ext bandwidth 128Kb cbq(red borrow) priority 2 \
    { client1_ext_data, client1_ext_ack }
queue client1_ext_data bandwidth 75% cbq(red borrow) priority 2
queue client1_ext_ack  bandwidth 25% cbq(red borrow) priority 3
queue client2_ext bandwidth 128Kb cbq(red borrow) priority 2 \
    { client2_ext_data, client2_ext_ack }
queue client2_ext_data bandwidth 75% cbq(red borrow) priority 2
queue client2_ext_ack  bandwidth 25% cbq(red borrow) priority 3
queue client3_ext bandwidth 128Kb cbq(red borrow) priority 2 \
    { client3_ext_data, client3_ext_ack }
queue client3_ext_data bandwidth 75% cbq(red borrow) priority 2
queue client3_ext_ack  bandwidth 25% cbq(red borrow) priority 3
# create the internal queue tree (fill in your real downlink speed)
altq on $IntIF cbq bandwidth <downlink speed> queue \
    { client1_int, client2_int, client3_int, int }

queue int bandwidth 90% cbq(red borrow default) priority 1
queue client1_int bandwidth 256Kb cbq(red borrow) priority 2 \
    { client1_int_data, client1_int_ack }
queue client1_int_data bandwidth 75% cbq(red borrow) priority 2
queue client1_int_ack  bandwidth 25% cbq(red borrow) priority 3
queue client2_int bandwidth 256Kb cbq(red borrow) priority 2 \
    { client2_int_data, client2_int_ack }
queue client2_int_data bandwidth 75% cbq(red borrow) priority 2
queue client2_int_ack  bandwidth 25% cbq(red borrow) priority 3
queue client3_int bandwidth 256Kb cbq(red borrow) priority 2 \
    { client3_int_data, client3_int_ack }
queue client3_int_data bandwidth 75% cbq(red borrow) priority 2
queue client3_int_ack  bandwidth 25% cbq(red borrow) priority 3
# nat and tag the different clients' packets
nat on $ExtIF inet from $Client1 to any tag client1 -> $ExtIP
nat on $ExtIF inet from $Client2 to any tag client2 -> $ExtIP
nat on $ExtIF inet from $Client3 to any tag client3 -> $ExtIP

# assign the tagged packets to the appropriate queues; a rule takes
# only one queue specification, so queue the upload on the external
# interface and the download on the internal one
pass out on $ExtIF inet all tagged client1 keep state \
    queue(client1_ext_data, client1_ext_ack)
pass out on $IntIF inet all tagged client1 keep state \
    queue(client1_int_data, client1_int_ack)
pass out on $ExtIF inet all tagged client2 keep state \
    queue(client2_ext_data, client2_ext_ack)
pass out on $IntIF inet all tagged client2 keep state \
    queue(client2_int_data, client2_int_ack)
pass out on $ExtIF inet all tagged client3 keep state \
    queue(client3_ext_data, client3_ext_ack)
pass out on $IntIF inet all tagged client3 keep state \
    queue(client3_int_data, client3_int_ack)

#end
This ruleset is not tested at all, but it should give you an idea of how it 
is supposed to work.
It assumes that all your internal clients are attached to a single NIC; with 
multiple internal NICs, one client cannot be set to borrow another client's 
data rate when that client does not exhaust it. Furthermore, you cannot 
directly control the download data rate of any of the clients: by queueing 
the traffic on the internal NIC most servers will throttle their sending 
rate, but you cannot guarantee anything.
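One detail worth spelling out: a child queue's percentage is relative to its 
parent queue, not to the interface. So 75% of a 128Kb parent is 96Kb and 25% 
is 32Kb, and the subqueues could equivalently be written with absolute 
values, for example:

```
# equivalent to the 75%/25% split of the 128Kb parent queue
queue client1_ext_data bandwidth 96Kb cbq(red borrow) priority 2
queue client1_ext_ack  bandwidth 32Kb cbq(red borrow) priority 3
```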

HTH
Stefan


Re: NAT - PF order

2003-09-12 Thread Stefan Zill
Shadi Abou-Zahra wrote:
> hello,

Hi,

> here are my questions:
> 1. NATing always happens before PF rules are applied. correct?

This is correct.

> 2. if all the NATing happens on NIC_A, why do i get such entries in my
> state table when an internal desktop tries to reach a server in DMZ 1:
> 192.168.0.13 -> 123.123.0.1 -> 123.123.0.13
> (ie. the private address is translated to the external bridge IP!)

The NATing actually happens before the packets are filtered on the INCOMING
interface, here NIC_B. You said you had a rule NATing packets from your
internal network to the Internet, so possibly you did not specify that
packets for other internal networks should not be translated (try a
"no nat on ..." rule).
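A sketch of such a "no nat" rule; the macro names ($IntNet, $DMZNet, $ExtIP)
are made up here and would have to match your setup:

```
# never translate traffic between internal networks
no nat on $NIC_A from $IntNet to $DMZNet
# translate everything else heading for the internet
nat on $NIC_A from $IntNet to any -> $ExtIP
```

The "no nat" rule has to come before the matching nat rule, since the first
matching translation rule wins.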

> 3. my understanding is that a packet from an internal desktop (ie.
> 192.168.0.13) to an internal server (ie. 10.0.0.13) would PASS IN ON
> NIC_B and then PASS OUT ON NIC_C but it doesn't seem to behave that
> way. did i get something wrong?

Your logic seems correct to me; I have no idea what is going wrong there.

> 4. equally, a server on DMZ 1 trying to reach a service on DMZ 2
> should PASS IN ON NIC_D and PASS OUT ON NIC_E but the packets seem to
> be going through NIC_A as well. does this make any sense or do i have
> a terribly bad setup?

Actually I still have no clue how you are going to route any packets through
your interfaces D and E, but I'm not _that_ experienced.

> 5. finally, is there any way to reach an internal server (ie.
> 10.0.0.13) through a "real" IP from both outside (NIC_A) and inside
> (NIC_B)?

Sure there is. Try "rdr on { $NIC_A, $NIC_B } from any to $serverIP port
1:65535 -> 10.0.0.13 port 1:*" or something similar (rdr rules take no
in/out direction; the translation is applied to incoming packets). From my
understanding, that should work.
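And since translation happens before filtering (see question 1), the pass
rules then have to match the already-translated address; a sketch under that
assumption:

```
rdr on { $NIC_A, $NIC_B } from any to $serverIP -> 10.0.0.13
# the filter already sees the translated destination
pass in on { $NIC_A, $NIC_B } from any to 10.0.0.13 keep state
```

Note that redirecting from the inside only works cleanly when client and
server sit on different subnets; otherwise the server's reply bypasses the
firewall.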

HTH
Stefan




Re: Basic pfctl question

2003-08-11 Thread Stefan Zill
Marc Eggenberger wrote:
> My basic question now. When I want certain traffic to be allowed to
> come in to the internal network, do I have to allow it on both
> Interfaces (hme0, hme3). Outgoing from hme3 shouldn't be restricted.
> I'm a bit confused with all those nat examples.

The default policy (without any rule) is "pass in all", "pass out all". So
you might choose to filter traffic on one of your interfaces and allow
everything on the other interface.
If you choose to filter on both of your interfaces, you will have to allow
the packets in on the first interface and out on the other one (and vice
versa).
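As a small sketch of the second approach (the mail server macro and port are
invented for illustration; with floating states the "keep state" on the
inbound rule may already cover the outbound leg, but the explicit rule does
no harm):

```
block all
# allow the connection in on the outer interface ...
pass in  on hme0 proto tcp from any to $mailserver port 25 keep state
# ... and out on the inner interface towards the server
pass out on hme3 proto tcp from any to $mailserver port 25 keep state
```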

HTH
Stefan