Re: [LARTC] How many (htb) tc classes and qdiscs are too many?

2005-06-03 Thread Szymon Miotk

Spencer wrote:

We have a Linux box that is acting as the gateway to the internet for about
400 people; typically no more than 50 of them are using the internet at any
given time.  We would like to provide different levels of access to
different users, for example 128kbps to some users and 256kbps to others.
We have considered creating a class and qdisc for each user (using htb),
but we don't know how much overhead creating 50-200 classes and qdiscs
would involve.  Would this put too much strain on the Linux box?  Is it
better to create fewer classes and qdiscs and assign multiple users to each?
I haven't been able to find any tests on the maximum effective number of
qdiscs, but it could be that I have just been looking in the wrong place.
If anyone has any ideas or could point me in the right direction it would
be greatly appreciated.


I have a P4 3.0 GHz with 1 GB RAM.
I have 3500 potential users (top load about 800 users, average 400). I
have 3 interfaces (2 WAN + 1 LAN), so I have 10500 queues total (3500 on
each interface).

The traffic is 24Mbit max, average 20Mbit.

Without u32 hashing my box ran at 60-70% CPU utilization. After applying
hashing the box runs at 25% peak utilization, average 15%.


The two things you must remember when running a box for many users:
* use iptables subchains - I prefer chains of 30-40 entries;
* use u32 hashing.
This will greatly improve CPU utilization - by about 500-1000% in my case.
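
For reference, a minimal sketch of the u32 hashing part (the interface name,
class ids and the 10.1.1.0/24 prefix are only examples here; the full recipe
is in the LARTC HOWTO chapter on hashing filters):

DEV=eth0   # example interface

# root u32 filter plus a 256-bucket hash table
tc filter add dev $DEV parent 1:0 prio 5 protocol ip u32
tc filter add dev $DEV parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256

# hash on the last octet of the destination address (offset 16 in the IP header)
tc filter add dev $DEV protocol ip parent 1:0 prio 5 u32 ht 800:: \
    match ip dst 10.1.1.0/24 \
    hashkey mask 0x000000ff at 16 \
    link 2:

# per-client filters then live in their bucket instead of one long list,
# e.g. 10.1.1.66 goes into bucket 0x42:
tc filter add dev $DEV protocol ip parent 1:0 prio 5 u32 ht 2:42: \
    match ip dst 10.1.1.66 flowid 1:66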

Szymon Miotk


[LARTC] ARP queries generating entries in routing cache

2005-03-18 Thread Szymon Miotk
Hello!
I've noticed a strange thing: when a client system generates an ARP
query for a nonexistent host, a routing cache entry is created.

My system is Fedora Core 2 with a vanilla 2.6.11 kernel.
The client is 10.1.1.2 with mask 255.255.0.0;
the router/firewall is 10.1.1.1 with mask 255.255.255.0.
Yes, the masks are different and this cannot be fixed easily.
When the client generates an ARP query for a nonexistent host inside the
10.1.1.0/24 network everything is fine - the query is dropped.
But when it asks for something like 10.1.44.4, the router still drops the
query, yet an entry in the routing cache is created.

This is a serious problem, because when someone has a virus which tries
to spread itself, it generates thousands of ARP queries per second, my
routing cache overflows and the traffic slows to a crawl.
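
A possible stopgap (an untested sketch; the values below are guesses, the
knobs live under /proc/sys/net/ipv4/route/) would be to watch the cache and
raise its limits:

# rough count of cached entries (each entry can span more than one output line)
ip route show cache | wc -l

# current limits
sysctl net.ipv4.route.gc_thresh net.ipv4.route.max_size

# raise the limits and expire entries faster
sysctl -w net.ipv4.route.gc_thresh=65536
sysctl -w net.ipv4.route.max_size=262144
sysctl -w net.ipv4.route.gc_timeout=30

# emergency valve
ip route flush cache

But that only hides the symptom; those entries should arguably not be
created at all.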

Has anybody run into such a problem?
Szymon Miotk
PS. The routing is configured correctly. There are no incomplete entries in
the ARP cache; only the routing cache is affected.


[LARTC] Huge system load using HTB

2004-10-06 Thread Szymon Miotk
Hi!
I have some problems with htb performance.
THE SETUP:
I have a network with 3 ISP uplinks and 1 local network uplink.
There are about 1700 clients.
I was shaping their bandwidth with HTB, using iptables mangling, in the
following manner (one class, qdisc, filter and mark per client):
# per-client HTB class
tc class add dev $DEV parent 1:10 classid 1:${CLASS_ID} htb rate \
    16kbit ceil 512kbit burst 2kb prio 2 quantum 1500
# per-class SFQ leaf qdisc
tc qdisc add dev $DEV parent 1:${CLASS_ID} handle ${CLASS_ID}: \
    sfq perturb 10
# classify by destination IP
tc filter add dev $DEV parent 1: protocol ip prio 17 u32 \
    match ip dst $IP flowid 1:${CLASS_ID}
# mark the client's outgoing traffic by source IP
iptables -A $CHAIN_NAME -t mangle -s $IP -j MARK --set-mark $CLASS_ID
I use iptables subchains, so that every chain contains at most 32 entries
(a sketch of the idea follows below).
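
Roughly, the subchain idea looks like this (a simplified, hypothetical sketch;
the real rules are generated from the client list, and the chain names and
marks here are made up):

# one subchain per /27 worth of clients, jumped to from mangle/FORWARD
iptables -t mangle -N sub_10_1_1_0
iptables -t mangle -A FORWARD -s 10.1.1.0/27 -j sub_10_1_1_0

# the per-client MARK rules then live inside the short subchain
iptables -t mangle -A sub_10_1_1_0 -s 10.1.1.5 -j MARK --set-mark 105
iptables -t mangle -A sub_10_1_1_0 -s 10.1.1.6 -j MARK --set-mark 106

This way a packet walks the list of range tests plus at most 32 per-client
rules, instead of the whole 1700-entry list.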
I have recently upgraded from RedHat 9.0 to Fedora Core 2. I cannot go
back to RH9, because I had other problems with it.
I use kernel 2.6.8-1.521 (the problem was the same with the original
kernel). I didn't recompile it.

THE PROBLEM:
When I load my rules, the system load jumps to 100%.
I have been testing it and I am certain that HTB is making the mess.
The server with all the iptables rules (including mangling) but without HTB
works well, with a load of about 3%.
But as soon as I turn HTB on, it starts to crawl.
The chart can be found here:
http://mtower.mlyniec.gda.pl/~spam/tst.png

It's a fairly strong machine (P4 2.8 with HT, 1 GB RAM) and it worked with
this setup quite well for half a year (system load never exceeded 30-40%).

I guess I have overlooked something, or the kernel has a bug.
Any clues?
Szymon Miotk


[LARTC] Preferring routing cache over ip rule

2004-03-04 Thread Szymon Miotk
Hello!
I have problem setting up some ip routing rules.
Let's assume the following network configuration:

                          _______
  server: 10.1.1.1       |       |
  internal --------------|eth0   |
  10.1/16                |   eth1|--- ISP1
  10.2/16                |   eth2|--- ISP2
                         |   eth3|--- ISP3
                         |_______|
And the following routing policy:
  All traffic from 10.1.1.1 via ISP1
  All traffic from 10.1/16  via ISP2
  All traffic from 10.2/16  via ISP3
So users' connections go via ISP2 or ISP3, and ISP1 is a dedicated link for
the server.

It works quite well, but there is one problem I cannot overcome:

When someone from the Internet connects to 10.1.1.1 through ISP2,
the return packets go out via ISP1.
And of course the remote host drops those packets, as they don't come
from the IP address it connected to.

I would like to have the following:
1. traffic from 10.1.1.1 goes via ISP1
2. traffic from 10.1/16 goes via ISP2
3. traffic from 10.2/16 goes via ISP3
4. 10.1.1.1 is accessible via every ISP
The routing cache shows the asymmetry:

# ip route show cache | grep GUEST_IP
10.1.1.1 from GUEST_IP dev eth0  src ISP2
GUEST_IP from 10.1.1.1 via ISP1 dev eth1  src eth0_ip_address
How can I make the routing cache take precedence over the ip rules?

Let me explain it more clearly with a drawing.

Situation now: the guest is confused, because the replies come back from the
wrong IP address.

                         _______
                        |       |
                        |   eth1|---ISP1---------.
  internal -------------|eth0   |                 guest (very confused)
  10.1.1.1              |   eth2|---ISP2---------'
                        |   eth3|--- ISP3
                        |_______|

(the guest connects to 10.1.1.1 via ISP2, but the replies leave via ISP1)
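
One direction I have been considering (an untested sketch, assuming the
kernel and iptables have CONNMARK support; the table numbers, rule priorities
and gateway variables are placeholders) is to remember which ISP a connection
arrived on and route the replies by that mark instead of by source prefix:

# per-ISP routing tables with their own default routes
ip route add default via $ISP1_GW dev eth1 table 101
ip route add default via $ISP2_GW dev eth2 table 102

# mark new inbound connections by the interface they arrived on
iptables -t mangle -A PREROUTING -i eth1 -m state --state NEW \
    -j CONNMARK --set-mark 1
iptables -t mangle -A PREROUTING -i eth2 -m state --state NEW \
    -j CONNMARK --set-mark 2

# copy the connection mark back onto the reply packets coming from inside
iptables -t mangle -A PREROUTING -i eth0 -j CONNMARK --restore-mark

# route replies by mark, before the per-source rules kick in
ip rule add fwmark 1 table 101 priority 50
ip rule add fwmark 2 table 102 priority 50

Would that be a sane way to do it, or is there something simpler?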
Thanks!
Szymon Miotk


[LARTC] Be careful with RedHat updates!

2003-11-26 Thread Szymon Miotk
The latest RedHat update of iproute doesn't support HTB.
I use Fedora's iproute package (2.4.7-11) instead.
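
A quick way to check whether a given tc binary knows about HTB at all (a
rough sketch; it briefly replaces the root qdisc on an interface of your
choice, so don't run it on a box with live shaping rules):

DEV=eth0    # pick a quiet interface
tc qdisc add dev $DEV root handle 1: htb && echo "this tc supports HTB"
tc qdisc del dev $DEV root

A tc built without HTB support will typically refuse the first command with
an 'Unknown qdisc' style of error.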
Szymon Miotk



[LARTC] u32 filter and NAT

2003-05-30 Thread Szymon Miotk
I want to limit each user in my network to a fixed bandwidth (let's say
256/128 kbit).
I use NAT (done with iptables).
Can I limit users on the outgoing interface with u32 rules like:

tc filter add dev eth0 parent 1: protocol ip prio 17 u32 \
    match ip src 10.10.10.10 flowid 1:10

It seems I made a mistake somewhere, or the source address is already
rewritten by NAT before the packet reaches the egress filter, so I must use
iptables mangling. BTW, what is the maximum value for --set-mark?
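
If mangling turns out to be the way to go, this is the kind of thing I have
in mind (a sketch; the mark value and addresses are just examples). Marking in
the mangle FORWARD chain happens before SNAT in POSTROUTING, so the internal
source address is still visible there:

# mark the user's traffic while the internal source IP is still intact
iptables -t mangle -A FORWARD -s 10.10.10.10 -o eth0 -j MARK --set-mark 10

# classify on the mark instead of the (already rewritten) source address
tc filter add dev eth0 parent 1: protocol ip prio 17 handle 10 fw flowid 1:10

As far as I know the mark is a 32-bit value, so --set-mark should accept
anything up to 4294967295.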

Thanks!

Szymon Miotk



Re: [LARTC] Intelligent P2P detection

2003-03-31 Thread Szymon Miotk
Luman wrote:
Probably I'm not the first one who needs to solve the p2p problem.
Because a large part of my traffic is eaten by p2p software like KaZaA,
eMule, Direct Connect etc., I'm looking for a way to detect such traffic
and mark it. However, the simple approach of matching, for instance, port
1214 for KaZaA doesn't work, because this software uses floating ports:
the traffic can be sent via different ports, and the ports can change on
the fly. This is rather well known.
So I'm looking for something that works at a higher level and analyses the
traffic content to determine the real protocol. It could be a kernel patch
or whatever. It only needs to be able to mark packets with a special
marker.

I need this solution not only for prioritizing the traffic (prioritizing
can be achieved in other ways) but also for selecting the Internet link. I
want to NAT this low-quality traffic to some specific address in order to
send it over a cheaper link.

What do you think, is there any solution for this? Or maybe there is an
ongoing project trying to tackle this global problem of detecting p2p
traffic?
Snort has a set of rules to detect P2P traffic. AFAIK Snort is quite fast,
at least fast enough to cope with 10Mbit on an average PC.
Maybe the solution is to watch for Snort alerts about P2P and automagically
cut the bandwidth of the host playing with P2P?
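
A very rough sketch of that idea (the alert log path, the 'p2p' match and
the penalty class 1:99 are all assumptions about the local setup):

# watch the Snort alert log and push offending source IPs into a slow class
touch /tmp/p2p_throttled
tail -F /var/log/snort/alert | grep --line-buffered -i p2p | \
while read line; do
    ip=$(echo "$line" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n1)
    [ -n "$ip" ] || continue
    grep -qxF "$ip" /tmp/p2p_throttled && continue   # already throttled
    echo "$ip" >> /tmp/p2p_throttled
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip src "$ip" flowid 1:99
done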

Szymon Miotk





[LARTC] Strange routing limitations and workaround

2003-02-06 Thread Szymon Miotk
Hi!

I have a strange problem with routing load balancing.
I cannot get full speed out of my ISPs until I fetch some big files from a
nearby FTP server.
I have a server with one connection to the internal network and 3 to ISPs:


                  ________
                 |    eth1|--- ISP1
  internal ------|eth0    |
  net            |    eth2|--- ISP2
  (~300 hosts)   |    eth3|--- ISP3
                 |________|

I have done everything as described in the LARTC HOWTO, chapter 4.2.

The main rule looks like this (the weight reflects link speed / 100):
ip route add default scope global \
    nexthop via $ISP1_GATEWAY dev eth1 weight 12 \
    nexthop via $ISP2_GATEWAY dev eth2 weight 10 \
    nexthop via $ISP3_SPRINT_GATEWAY dev eth3 weight 20

Total bandwidth available is 4.2 Mbit.
After I restart the server I can get 2.0 Mbit at most, with the first link 5%
utilized and the second and third links about 50% each.
When I fetch some big files from a nearby FTP server (4 x linux kernel =
~80 MB), the links start to work normally, reaching 75-100% utilization. All
those big files go via the first link.
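
My current guess (please correct me if this is wrong) is that the multipath
route only chooses a nexthop when a destination first enters the routing
cache, so the choice stays pinned per destination afterwards. A rough way to
inspect and reset that (standard iproute2 commands; the destination below is
just a placeholder):

DST=192.0.2.1                        # example destination, substitute a real one
ip route get $DST                    # shows which nexthop/device was chosen
ip route show cache | grep $DST      # shows what is already cached for it
ip route flush cache                 # drop cached choices and force re-balancing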

Can someone possibly explain that and teach me how to get full speed 
without such shamanism?

Szymon Miotk
