Re: [LARTC] Tocken Bucket with priority?

2006-04-05 Thread Andreas Hasenack
On Wed, Apr 05, 2006 at 03:18:06PM +0200, Emanuele Colombo wrote:
 Hi. I'm trying to get a traffic shaper like this:
 
 
 VoIP pkts --[queue]--\
                       +--O--[token bucket]--> out
 Data pkts --[queue]--/
 
 In this shaper, VoIP packets are in a different queue from any other kind of
 packet. I want a data packet to be served only when there are no packets in
 the VoIP queue (i.e. when the VoIP queue is empty).
 Furthermore the total traffic that leaves this shaper needs to be limited to
 a specific (and precise) value of bandwidth, like a token bucket.
 
 
 I can't use something like this (PRIO + TBF), because then, when data
 congestion happens, VoIP packets may be lost too (packet drops happen on the
 TBF queue):
 
 VoIP pkts --\
              +--[PRIO]--O--[TBF queue]--> out
 Data pkts --/
 
 I also can't use HTB, because it doesn't provide a priority mechanism
 matching my needs, nor CBQ, because its bandwidth-limiting algorithm isn't
 very precise (according to the documentation).

What about using HTB and *then* using PRIO as its leaf class? You would
use HTB only to shape.
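A minimal sketch of that suggestion (the rates, band count, and SIP port match below are placeholder assumptions, not values from this thread):

```shell
# HTB root does the precise shaping; a PRIO leaf under it gives VoIP
# strict priority over data. All numeric values here are examples.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit ceil 512kbit
tc qdisc add dev eth0 parent 1:10 handle 2: prio bands 2 \
    priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
# band 2:1 is always served before band 2:2; steer VoIP into 2:1
tc filter add dev eth0 parent 2: protocol ip prio 1 u32 \
    match ip dport 5060 0xffff flowid 2:1
```

Since drops then happen inside the PRIO bands rather than in a shared TBF queue, data congestion shouldn't cost VoIP packets.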


___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc


Re: [LARTC] multipath algorithm

2006-03-15 Thread Andreas Hasenack
On Wed, Mar 15, 2006 at 04:22:06PM +0100, Eduardo Fernández wrote:
 Seems really interesting, I also noticed this in kernel sources but I
 didn't try it since I didn't see it in any howtos out there. I'm using
 load balancing with Julian's patch for 5 dsl lines but I will be
 adding 20 more in short, so I may try out this. BTW Julian's patch
 didn't work for me with 2.6 kernel, but it did with 2.4.29.

I never used those patches. To me, not being included in the mainline
kernel for all those years has to mean something is broken somewhere.



[LARTC] multipath algorithm

2006-03-13 Thread Andreas Hasenack
I've been reading about multipath routes and found something that no howto I 
saw mentioned so far: multipath algorithms.

The kernel has the following:
# zgrep MULTIPATH_ /proc/config.gz
CONFIG_IP_ROUTE_MULTIPATH_CACHED=y
CONFIG_IP_ROUTE_MULTIPATH_RR=m
CONFIG_IP_ROUTE_MULTIPATH_RANDOM=m
CONFIG_IP_ROUTE_MULTIPATH_WRANDOM=m
CONFIG_IP_ROUTE_MULTIPATH_DRR=m
CONFIG_DM_MULTIPATH_EMC

iproute2 also has support for these (at least, it passes them on to the
kernel):

static char *mp_alg_names[IP_MP_ALG_MAX+1] = {
	[IP_MP_ALG_NONE] = "none",
	[IP_MP_ALG_RR] = "rr",
	[IP_MP_ALG_DRR] = "drr",
	[IP_MP_ALG_RANDOM] = "random",
	[IP_MP_ALG_WRANDOM] = "wrandom"
};

The "ip route add" option is mpath. I quickly tried it with an ADSL modem on
ppp0 and a dialup one on ppp1, and drr seems to have worked: tcpdump
showed locally originated traffic going out both interfaces.
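The command shape was roughly this (a sketch; the exact syntax may vary by iproute2 version, and ppp links can usually be named by device alone):

```shell
# Multipath default route selecting nexthops with the drr algorithm
ip route add default mpath drr \
    nexthop dev ppp0 \
    nexthop dev ppp1
```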

Anybody else tried these and care to comment?


[LARTC] negative token/ctokens

2006-03-13 Thread Andreas Hasenack
In this simple htb setup:
# tc -s -d class ls dev eth0
class htb 1:1 root rate 30bit ceil 30bit burst 1749b/8 mpu 0b overhead 
0b cburst 1749b/8 mpu 0b overhead 0b level 7
 Sent 13171835 bytes 13169 pkt (dropped 0, overlimits 0 requeues 0)
 rate 45848bit 10pps backlog 0b 0p requeues 0
 lended: 5272 borrowed: 0 giants: 0
 tokens: -84429 ctokens: -84429

class htb 1:2 parent 1:1 prio 0 quantum 1500 rate 8bit ceil 30bit 
burst 1639b/8 mpu 0b overhead 0b cburst 1749b/8 mpu 0b overhead 0b level 0
 Sent 12243472 bytes 8787 pkt (dropped 0, overlimits 0 requeues 0)
 rate 43264bit 6pps backlog 0b 0p requeues 0
 lended: 3515 borrowed: 5272 giants: 0
 tokens: -181860 ctokens: -86779

class htb 1:3 parent 1:1 leaf 30: prio 0 quantum 2750 rate 22bit ceil 
30bit burst 1709b/8 mpu 0b overhead 0b cburst 1749b/8 mpu 0b overhead 0b 
level 0
 Sent 928363 bytes 4382 pkt (dropped 0, overlimits 0 requeues 0)
 rate 3400bit 4pps backlog 0b 0p requeues 0
 lended: 4382 borrowed: 0 giants: 0
 tokens: 61291 ctokens: 46039

class prio 30:1 parent 30:

class prio 30:2 parent 30:

class prio 30:3 parent 30:

What does it mean when the leaf 1:2 class has a negative token/ctoken count? 
I'm generating this traffic with a wget --limit-rate=5000 command.
My understanding is that this indicates the number of tokens available for 
burst traffic, is that correct? How did it become negative? I thought that 
when the bucket is empty, traffic is delayed until a token shows up, or 
eventually dropped.


Re: [LARTC] Balancing multiple connections and NAT

2006-03-05 Thread Andreas Hasenack
On Thu, 23 Feb 2006 20:41, Markus Schulz wrote:
 you need a patch for NAT processing with multiple gateways. This will
 then save the routing information for each connection inside the NAT
 structures, so that each packet of an established connection will get
 routed over the same gateway. You can find the patches here:
 http://www.ssi.bg/~ja/#routes
 please read the guides (nano howto or dgd-usage) carefully.

Any idea why these patches are not yet integrated into the upstream kernel?


Re: [LARTC] Patch to allow for the ATM cell tax

2006-03-03 Thread Andreas Hasenack
On Thu, Mar 02, 2006 at 07:27:13PM -0500, Jason Boxman wrote:
 Any chance something like this can be applied to q_tbf?  It's been classful 
 for a while and I find a tbf with a prio under it works quite well for my 

The tbf qdisc is classful?



Re: [LARTC] Patch to allow for the ATM cell tax

2006-03-03 Thread Andreas Hasenack
On Fri, Mar 03, 2006 at 11:18:00AM -0500, Jason Boxman wrote:
 On Friday 03 March 2006 08:43, Andreas Hasenack wrote:
  On Thu, Mar 02, 2006 at 07:27:13PM -0500, Jason Boxman wrote:
   Any chance something like this can be applied to q_tbf?  It's been
   classful for a while and I find a tbf with a prio under it works quite
   well for my
 
  The tbf qdisc is classful?
 
 It has been since like 2.6.9, yes.  I was as surprised as you, but I use it 
 with a leaf prio all the time and have for a year now.

If this is correct, then the docs are really in bad shape. They are not only
outdated, but just plain wrong in many cases.

But tbf is still not your regular classful qdisc, or I'm misinterpreting
things:

# tc qdisc add dev eth0 handle 1: root tbf rate 300kbit burst 10k latency 10ms
# tc class add dev eth0 classid 1:1 parent 1: tbf
Error: Qdisc tbf is classless.

or

# tc qdisc add dev eth0 handle 1: root tbf rate 300kbit burst 10k latency 10ms
# tc class add dev eth0 classid 1:1 parent 1: prio
Error: Qdisc prio is classless.

I'm using iproute2-2.6.15 and kernel-2.6.12



Re: [LARTC] Patch to allow for the ATM cell tax

2006-03-03 Thread Andreas Hasenack
On Fri, Mar 03, 2006 at 01:45:31PM -0500, Jason Boxman wrote:
 Andreas Hasenack said:
  On Fri, Mar 03, 2006 at 11:18:00AM -0500, Jason Boxman wrote:
  On Friday 03 March 2006 08:43, Andreas Hasenack wrote:
   On Thu, Mar 02, 2006 at 07:27:13PM -0500, Jason Boxman wrote:
Any chance something like this can be applied to q_tbf?  It's been
 classful for a while and I find a tbf with a prio under it works
 quite well for my
  
   The tbf qdisc is classful?
 
  It has been since like 2.6.9, yes.  I was as surprised as you, but I use it
  with a leaf prio all the time and have for a year now.
 
  If this is correct, then the docs are really in bad shape. They are not
 only outdated, but just plain wrong in many cases.
 
 Yes.
 
  But tbf is still not your regular classful qdisc, or I'm misinterpreting
 things:
 
 tc qdisc add dev eth0 root handle 1: tbf rate ${RATE}kbit \
   burst 1600 limit 1
 tc qdisc add dev eth0 parent 1:1 handle 2: prio bands 4
 tc qdisc add dev eth0 parent 2:1 handle 10: pfifo limit 10
 tc qdisc add dev eth0 parent 2:2 handle 20: pfifo limit 10
 tc qdisc add dev eth0 parent 2:3 handle 30: pfifo limit 10
 tc qdisc add dev eth0 parent 2:4 handle 40: tbf rate \
   $(($RATE-32))kbit burst 1600 limit 1
 tc qdisc add dev eth0 parent 40:1 handle 33: sfq perturb 1
 
 But, you're right.  Classful is probably the wrong way of saying it.
 
 Perhaps I meant you can attach a different queueing discipline to tbf.
 It's more like tbf has a nested bfifo attached, which you can replace
 with anything you want since around 2.6.9.
 
 I guess I'm used to using prio and tbf, where you can attach various leaf
 qdiscs and have more leaf qdiscs attached.  It's certainly not the same
 thing as cbq, htb, or hfsc.  Oops.  My bad.

Thanks for the example and the explanation, it was very helpful. It also
means I can try new things :)



Re: [LARTC] Htb queueing problem

2006-03-01 Thread Andreas Hasenack
On Wed, Mar 01, 2006 at 02:48:18PM +, Andy Furniss wrote:
 than bulk. Also remember when setting rates that htb will see ip packets 
 as ip length + 14 when used on ethX

Could you elaborate on this a bit?
I suppose you also meant this in an earlier message when you mentioned
that the overhead was not included in the bw calculations.
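To make the arithmetic concrete, here is a tiny helper of my own (not from any tool) illustrating the quoted remark: on an ethX device, HTB reportedly accounts each IP packet as the IP length plus 14 bytes of Ethernet header.

```python
ETH_HEADER = 14  # bytes of Ethernet header HTB adds, per the quoted remark

def htb_charged_length(ip_len: int) -> int:
    """Bytes HTB charges against its buckets for one IP packet."""
    return ip_len + ETH_HEADER

# A 1500-byte IP packet is charged as 1514 bytes, so a stream of
# full-size packets uses ~0.9% more of the configured rate than the
# IP-level throughput would suggest; small packets are hit much harder.
```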



[LARTC] why isn't 1:1 getting the traffic? [filter question]

2006-02-24 Thread Andreas Hasenack
With the below script, whenever I ping 10.0.16.10 (which matches the
only filter I have), traffic still gets sent to the default 1:2 class
instead of 1:1, and I don't know why... Any hints?

(kernel 2.6.12, iproute2-2.6.15)

tc qdisc del dev eth0 root > /dev/null 2>&1
tc qdisc add dev eth0 handle 1: root htb default 2
tc class add dev eth0 classid 1:1 parent 1: htb rate 100kbps ceil 100kbps quantum 1500
tc class add dev eth0 classid 1:2 parent 1: htb rate 90mbit ceil 90mbit quantum 1500
tc qdisc add dev eth0 handle 2: parent 1:2 sfq perturb 10
tc class add dev eth0 classid 1:10 parent 1:1 htb prio 0 rate 30kbps quantum 1500
tc qdisc add dev eth0 handle 10: parent 1:10 sfq perturb 10
tc class add dev eth0 classid 1:11 parent 1:1 htb prio 0 rate 70kbps ceil 100kbps quantum 1500
tc qdisc add dev eth0 handle 20: parent 1:11 sfq perturb 10
tc class add dev eth0 classid 1:12 parent 1:1 htb rate 60kbps ceil 100kbps quantum 1500
tc qdisc add dev eth0 handle 30: parent 1:12 sfq perturb 10
tc filter add dev eth0 parent 1:0 prio 1 protocol ip u32 \
    match ip dst 10.0.16.10/32 \
    flowid 1:1


Status after pinging 10.0.16.10 a few times (notice traffic on 1:2, but not on 
1:1):
qdisc htb 1: r2q 10 default 2 direct_packets_stat 0 ver 3.17
 Sent 516 bytes 7 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 2: parent 1:2 limit 128p quantum 1514b flows 128/1024 perturb 10sec
 Sent 516 bytes 7 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 10: parent 1:10 limit 128p quantum 1514b flows 128/1024 perturb 10sec
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 20: parent 1:11 limit 128p quantum 1514b flows 128/1024 perturb 10sec
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 30: parent 1:12 limit 128p quantum 1514b flows 128/1024 perturb 10sec
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

class htb 1:11 parent 1:1 leaf 20: prio 0 quantum 1500 rate 56bit ceil 
80bit burst 1669b/8 mpu 0b overhead 0b cburst 1699b/8 mpu 0b overhead 0b 
level 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 24429 ctokens: 17408

class htb 1:1 root rate 80bit ceil 80bit burst 1699b/8 mpu 0b overhead 
0b cburst 1699b/8 mpu 0b overhead 0b level 7
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 17408 ctokens: 17408

class htb 1:10 parent 1:1 leaf 10: prio 0 quantum 1500 rate 24bit ceil 
24bit burst 1629b/8 mpu 0b overhead 0b cburst 1629b/8 mpu 0b overhead 0b 
level 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 55636 ctokens: 55636

class htb 1:2 root leaf 2: prio 0 quantum 1500 rate 9Kbit ceil 9Kbit 
burst 12836b/8 mpu 0b overhead 0b cburst 12836b/8 mpu 0b overhead 0b level 0
 Sent 516 bytes 7 pkt (dropped 0, overlimits 0 requeues 0)
 rate 8bit 0pps backlog 0b 0p requeues 0
 lended: 7 borrowed: 0 giants: 0
 tokens: 1164 ctokens: 1164

class htb 1:12 parent 1:1 leaf 30: prio 0 quantum 1500 rate 48bit ceil 
80bit burst 1659b/8 mpu 0b overhead 0b cburst 1699b/8 mpu 0b overhead 0b 
level 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 28329 ctokens: 17408



[LARTC] tc filter can target only leaf classes?

2006-02-24 Thread Andreas Hasenack
(using htb)

I'm trying to learn tc filter and it seems the flowid parameter can only
point to leaf classes. Actually, it can point anywhere, but it doesn't
seem to work unless it points to a leaf class. Is this correct?

For example, I have this tree:

            eth0
             |
             1:
            /  \
        1:10    1:20
        /  \      |
     1:30  1:40  20:
       |     |
      30:   40:

1: is htb qdisc, with default pointing to minor 20.

And this filter:

iptables -t mangle -A OUTPUT -d $DSTHOST -j MARK --set-mark 1
tc filter add dev $DEV parent 1:0 prio 1 protocol ip \
handle 1 \
fw \
flowid 1:10

Now, I only see 1:10 getting the traffic if 1:30 and 1:40 don't exist.
The moment I add 1:30, 1:40 and their qdiscs, the above filter stops
working and this same traffic starts going to 1:20, which is the default
set at 1:'s qdisc. 

Why does the filter stop working? I was expecting it to keep working and
then I could further filter this traffic into 1:30 and 1:40 *at* 1:10.



[LARTC] 1k: 1000 or 1024?

2006-02-23 Thread Andreas Hasenack
The docs[1][2] suggest it's 1024, but tc says something else:

# tc qdisc add dev eth0 root tbf rate 1kbps latency 50ms burst 1500

# tc -s qdisc ls dev eth0
qdisc tbf 8009: rate 8000bit burst 1499b lat 48.8ms
 ^^^
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0



If 1k were 1024, then I would have 8192bit above.
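The arithmetic behind that observation, spelled out (my own check, with tc's convention that "bps" means bytes per second):

```python
# tc printed "rate 8000bit" for "1kbps", which only adds up if
# k = 1000 and bps = bytes per second.
if_k_is_1000 = 1 * 1000 * 8   # 8000 bit/s: what tc actually shows
if_k_is_1024 = 1 * 1024 * 8   # 8192 bit/s: what the docs would imply
```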


1. http://www.docum.org/docum.org/faq/cache/74.html
2. http://ds9a.nl/2.4Networking/howto/lartc.qdisc.html#LARTC.QDISC.EXPLAIN



[LARTC] userlevel should not need to know about HZ?

2006-02-23 Thread Andreas Hasenack
Kernel people tell me users should never need to know the value of HZ
used by the currently running kernel. One kernel hacker even told me
that Linus once changed the value from 100 to 1000 just to see user
space programs break.

However, it is needed for the buffer parameter in TBF. The tc-tbf(8)
manpage:

If your buffer is too small, packets may be dropped because more
tokens arrive per timer tick than fit in your bucket. The minimum
buffer size can be calculated by dividing the rate by HZ.

My kernel (2.6.12), for example, doesn't have a CONFIG option in
/proc/config.gz. I only found out the correct HZ value by looking into
/usr/include/asm/param.h, and even there there are two values: 1000 for
__KERNEL__ and 250 for the rest. Newer kernels have CONFIG options, and
1000 is just one of the possible values.

So, how do we reliably calculate the minimum value for buffer/burst/maxbursts?



Re: [LARTC] userlevel should not need to know about HZ?

2006-02-23 Thread Andreas Hasenack
On Thu, 23 Feb 2006 19:47, Andreas Klauer wrote:
  So, how do we reliably calculate the minimum value for
  buffer/burst/maxbursts?

 Trial & Error, not that I ever had much luck with TBF though...

From my experiments, the minimum seems to be either the MTU plus a few bytes
or the result of rate/HZ, whichever is higher.
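That empirical rule can be sketched like this (the function name and the HZ/MTU defaults are my assumptions, not anything tc defines):

```python
def min_burst_bytes(rate_bit_per_s: int, hz: int, mtu: int = 1500) -> int:
    """Estimate the smallest workable tbf burst, in bytes:
    whichever is higher, the MTU or rate/HZ."""
    tokens_per_tick = rate_bit_per_s // 8 // hz  # bytes of tokens per timer tick
    return max(mtu, tokens_per_tick)

# At 200 kbit/s on a HZ=100 kernel, rate/HZ is only 250 bytes, so the
# MTU term dominates; at 2 Mbit/s the rate/HZ term (2500 bytes) wins.
```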


[LARTC] mysterious rebounce in htb

2006-02-22 Thread Andreas Hasenack
Attached is a graph obtained with ethereal where, after time +/-45s, there is
a rebounce which I can't explain.

Setup is this:
- my machine starts to generate traffic at maximum speed against a
  target machine (using nc < /dev/zero here and nc -l > /dev/null there)
- traffic pattern is:
   0s: dst port 2500 (red)
  20s: dst port 8000 (blue)
  40s: kill port 2500 traffic
  60s: kill port 8000 traffic
- htb is limiting that traffic to 100mbps at all times (see below for
  htb configuration)

Could that bounce be a result of some wrong configuration I have? Or
some other traffic interfering with my measurements? I used a host
10.0.16.10 filter in ethereal, and since the bounce is compensated in
the other traffic I don't think it was some external interference, but
who knows.

htb config is created by this script. Note I created two root classes so
that my regular work on this desktop doesn't interfere with the
measurements and tests I'm performing (or so I hope):

#!/bin/bash
DEV=eth0
WWWPORT=8000
SMTPPORT=2500
MAPI=10.0.16.10
tc qdisc del dev $DEV root > /dev/null 2>&1

# root qdisc
tc qdisc add dev $DEV handle 1: root htb default 2

# root classes
tc class add dev $DEV classid 1:1 parent 1: htb rate 100kbps
tc class add dev $DEV classid 1:2 parent 1: htb rate 90mbit
tc qdisc add dev $DEV handle 2: parent 1:2 sfq perturb 10

# a/www
tc class add dev $DEV classid 1:10 parent 1:1 htb rate 30kbps ceil 100kbps prio 0
tc qdisc add dev $DEV handle 10: parent 1:10 sfq perturb 10

# a/smtp
tc class add dev $DEV classid 1:11 parent 1:1 htb rate 10kbps ceil 100kbps prio 0
tc qdisc add dev $DEV handle 20: parent 1:11 sfq perturb 10

# b
tc class add dev $DEV classid 1:12 parent 1:1 htb rate 60kbps ceil 100kbps
tc qdisc add dev $DEV handle 30: parent 1:12 sfq perturb 10

# anything going to $MAPI falls into class 1:1
tc filter add dev $DEV parent 1:0 prio 10 protocol ip u32 \
match ip dst $MAPI/32 \
flowid 1:1

# on 1:1: a/www - 1:10
tc filter add dev $DEV parent 1:1 prio 5 protocol ip u32 \
match ip dst $MAPI/32 \
match ip protocol 0x06 0xff \
match ip dport $WWWPORT 0xffff \
flowid 1:10

# on 1:1: a/smtp - 1:11
tc filter add dev $DEV parent 1:1 prio 5 protocol ip u32 \
match ip dst $MAPI/32 \
match ip protocol 0x06 0xff \
match ip dport $SMTPPORT 0xffff \
flowid 1:11

# on 1:1: b (telnet, for example) - 1:12
tc filter add dev $DEV parent 1:1 prio 5 protocol ip u32 \
match ip dst $MAPI/32 \
match ip protocol 0x06 0xff \
match ip dport 23 0xffff \
flowid 1:12




rebounce-ann.png
Description: PNG image


[LARTC] calculating burst for TBF

2006-02-20 Thread Andreas Hasenack
I'm using tc from iproute-2.6.15 with a 2.6.12 kernel.

I was testing the effects of the burst parameter in a tbf qdisc.
Basically, I was testing this statement from the tc-tbf(8) manpage:

If your buffer is too small, packets may be dropped because more tokens arrive
per timer tick than fit in your bucket.  The minimum buffer size can be
calculated by dividing the rate by HZ.

So, for a 200kbit rate on intel, this would yield me a minimum burst of
2000 bits, or 250 bytes.

I then do this:
tc qdisc add dev eth0 handle 1: root tbf latency 50ms burst 250b rate 200kbit

but all packets are dropped. I then rise burst to 300b, 400b, even 900b and it
is still not working. It only starts working when I raise it to 2000b. Which,
besides being the wrong unit (bits versus bytes), is the result of the rate/HZ
calculation.

The tc(8) manpage says that "b" or a bare number means bytes, but it seems
this parameter ends up being interpreted as bits? If not, what is wrong then?



[LARTC] tools for traffic monitoring

2003-01-21 Thread Andreas Hasenack
Hi all,

are there any tools (besides ntop) which you guys use to monitor
traffic, service by service?

mrtg is not enough, I want something that can show me traffic on
a service by service basis, and from/to which host. I guess
ntop is quite complete in this area, but is there anything else?

Thanks

___
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/



Re: [LARTC] many ways to do load balancing (or not?)

2002-11-22 Thread Andreas Hasenack
On Thu, Nov 21, 2002 at 02:20:57PM -0800, William L. Thomson Jr. wrote:
 Also I do not believe the load balancing is packet based. Usually it's
 more connection based. Meaning that if you request a file, more than
 likely all parts of that file will be transfered using the same route.
 If you request it again, it may take the same route or another.

If I make many connections from one IP (inside) to a web server (outside),
for example (like many simultaneous downloads, or a complex page), I think
they will all go via the same route, because the originating IP and the
destination are the same. It will hit the cache.
Hmm, not good if your users use a proxy, but then again, the proxy would
probably cache the page anyway.

 Now if the request was generated from the inside it would still work
 some what the same. If I send two emails out at once, the first will use
 gw1 and the other will use gw2.

Unless they are sent to the same MTA on the outside, in which case it will
get a cache hit (supposing the 60s haven't elapsed by then). Or not?

 All packets for each will travel via the same route and use the same
 gateway from start to finish.

Agreed.

 If it was more on a packet level, the other end would be confused.

Sure. When I said packet count before I was thinking about something
along the lines of real traffic balancing, that is, the router somehow
remembering how many packets it sent to each route and choosing the
least used one.

 It would be getting responses from an IP it was not expecting response
 from. I would imagine each side to send redirects, and all sorts of
 problems. Like it receiving every other packet and dropping the packets
 in between.

And breaking stateful firewalls.

 If during a file transfer the route cache is flushed, there is the
 possibility of the rest of the packets going out a different interface.

Uh oh... It shouldn't be that simple, what about that 60s timeout for
the cache? It's very likely to occur during a file transfer.

 Neither does it perfectly or with intelligent algorithms. Neither allow
 you to use all paths for a single transfer.

Only things like MPPP, channel bonding, or TEQL, I guess.

 So if you have two 1.5 mbs connection, you do not end up with a 3.0 mbs
 line. You do have one internal gateway for both, and if one goes down
 the other can be used. So you do have redundancy, and both lines can be
 used to serve difference requests to different places.

So it's more like redundancy/HA with a best effort towards balancing.




Re: [LARTC] many ways to do load balancing (or not?)

2002-11-22 Thread Andreas Hasenack
On Thu, Nov 21, 2002 at 08:55:05PM -0200, Christoph Simon wrote:
 My understanding is that, for equalize to work, all lines must go to
 the same point, and that must not be the end point. Also, this same
 point must implement equalize in very much the same way.

What is it that you call a point here (destination)? The same ISP? The
same network?

I understand that it should be the same ISP because of egress filtering, that
is, one ISP should block packets with a source address that doesn't belong
to the ISP supplying the link.




Re: [LARTC] many ways to do load balancing (or not?)

2002-11-22 Thread Andreas Hasenack
On Thu, Nov 21, 2002 at 04:24:06PM -0800, William L. Thomson Jr. wrote:
 But I have been informed, I believe by Julian and others, that the load
 balancing / multipath equalize feature can be used even without NAT, but
 in a different situation than mine?

I'm confused as well. Suppose you have two links to the internet, a DMZ,
and an internal network, SNAT'ed. Suppose you have a public web server
in the DMZ (the DMZ is not SNAT'ed).

How would multipath route (with or without equalize) help here? I mean,
it would only really help if there were connections starting from
the inside (DMZ or SNAT'ed network) to the outside. But:

- the internal network would probably do many downloads, and not uploads

- the web server doesn't originate traffic; it responds to requests from
the outside world, and it will respond using the same link the request
came in on (or not?)




Re: [LARTC] many ways to do load balancing (or not?)

2002-11-22 Thread Andreas Hasenack
On Fri, Nov 22, 2002 at 10:05:25AM -0800, William L. Thomson Jr. wrote:
  So it's more like redundancy/HA with a best effort towards balancing.
 
 Yes, or in other terms: my need was a single gateway for my servers
 although I have two ISPs. The amount of load balancing you get is about
 the same as the amount of redundancy. You get a partial solution to
 both, but not a complete solution.

I just found this patch, has anybody already played with it?

ftp://sliepen.warande.net/pub/eql/patch-2.4.18-2.gz

Excerpt:

Load balancing needed a slight adjustment to the unpatched linux kernel,
because of the route cache. Multipath is an option already found in the old
2.1.x kernels. However, once a packet arrives, and it matches a multipath
route, a (quasi random) device out of the list of nexthops is taken for its
destination. That's okay, but after that the kernel puts everything into a
hash table, and the next time a packet with the same source/dest/tos arrives,
it finds it is in the hash table, and routes it via the same device as last
time. The adjustment I made is as follows: If the kernel sees that the route
to be taken has got the 'equalize' flag set, it not only selects the random
device, but also tags the packet with the RTCF_EQUALIZE flag. If another
packet of the same kind arrives, it is looked up in the hash table. It then
checks if our flag is set, and if so, it deletes the entry in the cache and
has to recalculate the destination again.
