Re: [LARTC] Bandwidth Monitor

2003-08-01 Thread Leonardo Balliache
Hi,

Have a look at 
http://ldp.kernelnotes.de/HOWTO/Querying-libiptc-HOWTO/bmeter.html

Best regards,

Leonardo Balliache

At 01:50 p.m. 01/08/03 -0300, you wrote:
Does anybody know about a bandwidth meter to use in Bering?
This is a script I built; it works well, but it's not very nice!!!  =P
#!/bin/bash

# Bandwidth Monitor: prints the instantaneous and average receive
# rate of $device once per second.

device=eth0

# Received-byte counter for $device, taken from /proc/net/dev.
bytes=`grep $device /proc/net/dev | cut -f 2 -d : | cut -d ' ' -f 2`
kbytes=`expr $bytes / 1024`
actual=$kbytes
x=0
total=0
promedio=0

# Loop forever (the original "while [ $i -le 2 ]" never changed $i,
# so it was an infinite loop too).
while :
do
    x=`expr $x + 1`
    bytes=`grep $device /proc/net/dev | cut -f 2 -d : | cut -d ' ' -f 2`
    kbytes=`expr $bytes / 1024`
    valor=`expr $kbytes - $actual`    # KB received since the last sample
    actual=$kbytes
    if [ $x -eq 5 ]                   # every 5 samples restart the average,
    then                              # seeding it with the previous one
        x=2
        total=$promedio
    fi
    total=`expr $total + $valor`
    promedio=`expr $total / $x`
    echo "current: [$valor KB/s] - average: [$promedio KB/s]"
    sleep 1
done
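
One caveat with the script above: the cut-based parsing breaks when the byte
counter is glued to the interface colon in /proc/net/dev, or padded with more
than one space. A more tolerant sketch (the rx_bytes helper name and the
optional file argument are my own, for illustration):

```shell
# Hypothetical helper: print the received-byte counter for a device.
# Stripping everything up to and including the colon makes it work
# whether or not spaces separate the colon from the counter.
# (A device whose name ends in "$1" would also match; good enough
# for a sketch.)
rx_bytes() {
    grep "$1:" "${2:-/proc/net/dev}" | sed 's/.*://' | awk '{ print $1 }'
}
```

The script's parsing line then becomes bytes=`rx_bytes $device`.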

Best regards.

Sebastián A. Aresca

___
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
Practical QoS
http://opalsoft.net/qos


Re: [LARTC] Good News! Good News!

2003-07-21 Thread Leonardo Balliache
Hi, Chijioke,

Do you have a link?

Best regards,

Leonardo Balliache

At 07:49 a.m. 20/07/03 -0700, you wrote:
Lo All,

Linux is Catching up with Cisco in VoIP traffic management.

Alex Clouter of LARTC just submitted a new QoS management tool using ESFQ
for beta-testing. So far it's working well; I'll keep everyone posted. This
week it's going to be gruelled hard, so keep ya fingers crossed, ya'll come
back now, hear!

Thanks all, and we're looking for more beta-testers.

Kalu

_
MSN 8 with e-mail virus protection service: 2 months FREE*
http://join.msn.com/?page=features/virus
___
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
Practical QoS
http://opalsoft.net/qos


Re: [LARTC] OUTPUT chain marking after or before routing?

2003-07-20 Thread Leonardo Balliache
Hi,

At 08:04 a.m. 18/07/03 +0300, you wrote:

- Original Message -
From: "Martin A. Brown" <[EMAIL PROTECTED]>
To: "Chijioke Kalu" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, July 17, 2003 6:55 PM
Subject: Re: [LARTC] OUTPUT chain marking after or before routing?
> Catalin,
>
> >When I try to connect to an smtp port somewhere on the Internet, tcpdump
> >shows me that these packets go to the eth2 interface (the main table
> >default route). I don't know where my mistake is, but it seems that the
> >marking in the OUTPUT chain occurs AFTER and not BEFORE routing. Is this
> >correct behaviour? How can I solve my problem? Please help!
>
> According to my reading of the KPTD (and my understanding), packets
> generated on the local machine have already been routed by the time the
> OUTPUT chain is traversed.  See:
>
>   http://www.docum.org/stef.coene/qos/kptd/
>
I'm very confused now. Look at what is written in the iptables man page:

#
   mangle This table is used for specialized packet alteration. It has two
          built-in chains: PREROUTING (for altering incoming packets before
          routing) and OUTPUT (for altering locally-generated packets
          before routing).
##
So how is it? Does OUTPUT mark packets AFTER or BEFORE routing?
Just before "output routing". OUTPUT is for locally generated packets;
these packets also have to be routed (output routing). The mangle OUTPUT
chain marks locally generated packets just before that output routing
takes place.

Perhaps the confusion arises because input routing also exists, where a
decision is taken: is this packet for this host, or does it just have to
be forwarded? Read Stef's remarks on the diagram:

Output routing : the local process selects a source address and a route. 
This route is attached to the packet and used later.
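
As an illustration of the practical consequence (my own sketch, not from the
original thread): the kernel gives a locally generated packet an initial
route before OUTPUT, but kernels with the netfilter rerouting check
re-evaluate the route after mangle OUTPUT when the mark has changed, so a
mark set there can still steer the packet. The mark value, table number and
gateway below are hypothetical:

```shell
# Mark locally generated SMTP traffic in the mangle OUTPUT chain.
iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 1
# Route marked packets through a dedicated table (hypothetical
# gateway 192.168.2.1 on eth1, table 100).
ip rule add fwmark 1 table 100
ip route add default via 192.168.2.1 dev eth1 table 100
```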

Best regards,

Leonardo Balliache

Practical QoS
http://opalsoft.net/qos


Re: [LARTC] hardware requirements

2003-07-12 Thread Leonardo Balliache
Hi, Nickola:

At 02:44 p.m. 11/07/03 +0300, you wrote:

The only problem I'm experiencing is with heavy udp traffic, e.g. Counter
Strike and Bnetd (Diablo, DiabloII, {Star|War}Craft) game play. Then the
machine reaches some really high load averages, like 8 to 10. I have no
idea how this could be avoided.

I'd appreciate any suggestions.
UDP traffic is very difficult to control because the protocol is
unresponsive, so the only way to put it under control is by dropping its
packets. This doesn't mean the sender is going to lower its rate, just
that you drop every packet that exceeds an upper limit.

You could do that at ingress using something like this:

tc qdisc add dev eth0 handle ffff: ingress

tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
   match ip protocol 17 0xff police rate 1200kbit \
   burst 75kb drop flowid :1
Here UDP traffic is policed to 1200kbit.

Then using tcindex you can filter these packets again on egress, to select
an output class for the "bad citizen" flows.

UDP transports RTP, which is always a problem. Application flows
travelling on it are very sensitive to latency and jitter. Some
multimedia formats, like MPEG, are also very sensitive to packet
dropping, because when you lose an I-frame you lose a whole GOP (group
of pictures); a real problem.

Using a policer you at least guarantee that they are not going to starve
your servers and the other (well-behaved TCP) flows. But the packet
massacre then creates quality problems for the multimedia applications.
Perhaps the best solution is overprovisioning: check how many flows of
this type you have at peak hours, check the bandwidth requirement of each
of them, and provision your servers to support the storm.

RED, or even better GRED, can help too. In this case the control is less
aggressive and, perhaps, things go better. It's just a matter of having
the time and patience to run some tests.
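
For reference, a GRED configuration generally takes the shape below. The
numbers are illustrative only, not a tested profile; check the iproute2
documentation before using them:

```shell
# Create a GRED qdisc with 2 virtual queues (DPs), default DP 2.
tc qdisc add dev eth0 root gred setup DPs 2 default 2 grio
# Parameterize each virtual queue; DP 1 drops less aggressively.
tc qdisc change dev eth0 root gred limit 60KB min 15KB max 45KB \
   burst 20 avpkt 1000 bandwidth 2Mbit DP 1 probability 0.02 prio 1
tc qdisc change dev eth0 root gred limit 60KB min 15KB max 45KB \
   burst 20 avpkt 1000 bandwidth 2Mbit DP 2 probability 0.04 prio 2
```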

You can search for more information on my site, below.

Best regards,

Leonardo Balliache



Practical QoS
http://opalsoft.net/qos


[LARTC] VoIP using Linux

2003-07-09 Thread Leonardo Balliache
Hi, Chijioke:

I have written something about VoIP using Linux that may be of interest
to you and anyone else interested in VoIP packet forwarding under Linux.

Have a look at http://opalsoft.net/qos/VoIP.htm

Best regards,

Leonardo Balliache





RE: [LARTC] Question on prio qdisc

2003-07-08 Thread Leonardo Balliache
Hi, Joseph:

At 03:31 p.m. 08/07/03 -0400, you wrote:

I understand your point, but I have tried to determine
the answers to a couple of questions from the code
without success (I don't understand the source code
well enough). Maybe you could answer these:
1) If I specify a queue length for the interface
by configuring "txqueuelen" to some number (say 1000), then
does that mean that the 3 FIFO queues for the 3 bands
in either "prio" scheduler or the default "pfifo_fast" scheduler are set each
to a size of 1000 packets? If this is the case, then I don't
see how my high priority queue can drop packets when the high
priority flow rate is well below what the link can support.
Can you think of any reason this should happen?
Have a look at a previous e-mail I sent to Lars. The reason? What's the
arrival rate? What's the departure rate? The queue fills at a rate defined
by the difference between those rates: for example, if packets arrive at
1200 packets/s and depart at 1000 packets/s, even a 1000-packet queue
fills (and starts dropping) after five seconds.

2) My testing leads me to believe that there are 3 queues,
but the buffer allocated for a new packet enqueued to
any of the queues is allocated from a common pool of buffers
that might be of size equal to "txqueuelen" = 1000. Is it
possible that it is implemented this way? I could see
high priority packets being dropped if this was the
implementation?
No. Again, have a look at the previous e-mail I sent to Lars.

Best regards,

Leonardo Balliache

Here you have a copy of Lars' e-mail:

I'm not very sure if the total queue length is 100 or 300: I searched the
code and couldn't find anything that tells me what the real length is.
Because "ip link" shows me that qlen is 100, I have to suppose that the
total length is 100.
But anyway, the PRIO queueing discipline principle requires that each
priority queue be independent, so we can't have the prio 0 queue full of
prio 1 or 2 packets (assuming the filters are working well). That would
go against the PRIO queueing discipline principle.
In "Differentiated Service on Linux HOWTO" (work in progress) I present
an explanation of the PRIO queueing discipline using some documentation
taken from Juniper Networks. Have a look at http://opalsoft.net/qos.
Also, if you read about Cisco PQ, the principle is identical. If we don't
respect the principle, the queue can be any sort of queue, but certainly
not a PRIO queue. So it is not possible to ask PRIO for something that
drops an already enqueued packet to make room for a newly arriving one.
I suggest that the configurations and tests that led to those
conclusions be revised.





Re: [LARTC] HTB burstable for 2 interface , how ?

2003-07-08 Thread Leonardo Balliache
Hi, Don:

At 08:34 p.m. 07/07/03 -0700, you wrote:

The point is that if X sends 1024 and Y tries to send 1 but fails,
the firewall cannot tell and therefore can't do anything about it.
Which firewall? Even if there is one, it has nothing to do here. Let's
put aside any coercive device and concentrate on TCP behavior.
I'm not nearly as confident as you in the tcp mechanisms.  First,
the upstream routers might favor one flow over another for whatever
reason.  Second, even if tcp works as you'd like, the convergence will
be much faster if there's some unused bandwidth.
Perhaps you are right, perhaps you are not. Who knows? TCP behavior is a
funny game with very clear rules. One of them is: the first one to lose a
packet has to cut its rate by half. Who's the lucky one? Do you know?
Packets are dropped by routers in a random selection process. TCP flows
are always fighting to increase their shares. Sometimes one of them gets
ahead, but then it loses a packet. Like Monopoly: walk back seven steps,
man!! You are punished.
But I think that things really are stacked against the new flow.
Why? Are the routers conspiring against it?

First the old one is going fast, and if it happens to lose a packet
to the new one, it will probably barely notice, just receiving a sack
and not slowing down.
** to the new one **

I am going to read this as ** to the old one **. It is not that easy. In
fact, TCP congestion algorithms are far from easy. Simplifying, the game
is: if you lose a packet you have to cut your sending window by half.
Check your new congestion window, check your outstanding packets, check
your peer's advertised window, and depending on all this, calculate how
many packets you can send, if you can send any at all.
However, in the reverse case, the new flow is
starting slow, and if it loses a packet it waits for a timeout to send
another one, then if it loses another it waits a longer timeout.
Eventually, when the new flow tries to speed up it's always bumping
against the old one, causing it to slow down again.
Probably the first flow has a higher probability of losing a packet,
because it has more packets out there. And if that happens, it has to cut
its window by half. At this moment the second flow is in slow-start; its
window is increasing almost geometrically (it doubles each time it
receives an ack). Again, who knows? The fight goes on.
A timeout is an extreme case. More probably a duplicate ack tells the
flow that one packet has been lost. Then slow-start is not the answer,
just fast retransmit and fast recovery, and then congestion avoidance.
What if the routers are RED routers? RED routers conspire (in fact they
are designed to conspire!!) against the stronger flows, those that keep
more packets in the router queue. Then the first flow has to walk with
care: someone out there is trying to kill one of its packets.
Are you really sure the system is guaranteed to converge to 50-50, or
even to converge at all?  I'm not.  Please convince me.
In any case, the firewall here is clearly not contributing to this
since it's not dropping any packets, or even slowing them down.
Again, the firewall doesn't matter.

I did some work to try to convince you. Have a look at
http://opalsoft.net/qos/TCPvsTCP.htm.
Finally, to close this pleasant conversation, don't forget that when you
configure your "controller devices" you are not controlling TCP; you are
just conspiring against it by killing some packets and using the simple
rule, "if you lose a packet, cut your window", to put the average
throughput under control.
Best regards,

Leonardo Balliache





RE: [LARTC] Question on prio qdisc

2003-07-08 Thread Leonardo Balliache
Hi, Lars:

At 10:10 a.m. 08/07/03 +0200, you wrote:

>
> each queue is an independent queue. Then if high priority packets are
> dropped is because the high priority queue has overflow, not because some
> (unique?) queue is full from low priority packets.
I am a little confused. If I type ifconfig, we can see "txqueuelen:100".
Does this imply that all three bands held by prio share this capacity of
100 packets, or can each of the three bands hold 100 packets in its own
buffer? In the latter case, prio can hold a maximum of 300 packets...
This question also applies to HTB and other schedulers.

Regards

Lars Landmark
I'm not very sure if the total queue length is 100 or 300: I searched the
code and couldn't find anything that tells me what the real length is.
Because "ip link" shows me that qlen is 100, I have to suppose that the
total length is 100.
But anyway, the PRIO queueing discipline principle requires that each
priority queue be independent, so we can't have the prio 0 queue full of
prio 1 or 2 packets (assuming the filters are working well). That would
go against the PRIO queueing discipline principle.
In "Differentiated Service on Linux HOWTO" (work in progress) I present
an explanation of the PRIO queueing discipline using some documentation
taken from Juniper Networks. Have a look at http://opalsoft.net/qos.
Also, if you read about Cisco PQ, the principle is identical. If we don't
respect the principle, the queue can be any sort of queue, but certainly
not a PRIO queue. So it is not possible to ask PRIO for something that
drops an already enqueued packet to make room for a newly arriving one.
I suggest that the configurations and tests that led to those
conclusions be revised.
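
For what it's worth, the device queue length can be inspected and changed
from userspace (assuming iproute2's "ip" tool):

```shell
# Show the configured transmit queue length for eth0 (the "qlen" field).
ip link show dev eth0
# Change it; queueing disciplines that size themselves from the device
# default will pick the new value up when (re)attached.
ip link set dev eth0 txqueuelen 1000
```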

Best regards,

Leonardo Balliache







[LARTC] Re: netfilter resets TCP conversation that was DNATed from thelocal machine to another

2003-07-07 Thread Leonardo Balliache
Hi, Michael:

At 06:45 a.m. 03/07/03 +0200, you wrote:


The netfilter list had no answer for this.
I don't think netfilter has anything to do with this. Remember that:

TCP deals with transmission errors, such as received segments apparently
not directed to the host that received them, by responding immediately
with a segment that has its RST flag set.

Search along those lines; perhaps you'll be lucky.

Best regards,

Leonardo Balliache





RE: [LARTC] Question on prio qdisc

2003-07-07 Thread Leonardo Balliache
Hi, Joseph:

At 06:45 a.m. 03/07/03 +0200, you wrote:

Thank you Lars and Wing for your responses.

I just ran another experiment using RED on the
lowest priority flow as suggested by Wing. I had
used RED before (on all flows) without success.
In this experiment, I specified a large queue size for the interface.
In this case the result was roughly the same.
I overloaded the interface with 9.5 Mb/s of
flow with 8 Mb/s at the lowest priority, 1 Mb/s
at the medium priority, and 0.5 Mb/s at the highest
priority hoping that RED would discard lowest
priority packets to make room for the higher priority
packets. The result was that all flows suffered
approximately the same packet loss rate (about 45%).
The interface was an 802.11b interface at 11 Mb/s
with the two nodes very close to each other so that
link quality was very high. Apparently RED does not
discard the lowest priority packets that are overloading
the interface. The following script was
used:
RED doesn't have any way to know which packets are lower, medium or
higher priority. To do what you want you need to use GRED.

My next step will be to modify the enqueue function to call
prio_drop() if there is not enough room to enqueue a new
packet as you have both suggested. I will have to get some help
for this from some of our engineers that can understand the code
better than I could.
The enqueue code is here:

static int
prio_enqueue(struct sk_buff *skb, struct Qdisc *sch)
{
        struct prio_sched_data *q = (struct prio_sched_data *)sch->data;
        struct Qdisc *qdisc;
        int ret;

        qdisc = q->queues[prio_classify(skb, sch)];

        if ((ret = qdisc->enqueue(skb, qdisc)) == 0) {
                sch->stats.bytes += skb->len;
                sch->stats.packets++;
                sch->q.qlen++;
                return 0;
        }
        sch->stats.drops++;
        return ret;
}
As you see in:

qdisc = q->queues[prio_classify(skb, sch)];

each queue is an independent queue. So if high priority packets are
dropped, it is because the high priority queue has overflowed, not
because some (unique?) queue is full of low priority packets.

Also, in drop-tail queues, as the name implies and PRIO is one, an
already enqueued packet is never dropped to make room for a new one.
Packets are dropped from the tail.

Best regards,

Leonardo Balliache





[LARTC] HTB doesn't respect rate values

2003-07-06 Thread Leonardo Balliache
At 01:05 a.m. 06/07/03 +0200, you wrote:

Hi, Sergiusz:

I make a test:
I send an email - it goes to default class 1:3. Then (during email is
sent) I get e big file through www. What happen? WWW rate is 30-70kbit.
So it doesn't keep his guaranted rate 122kbit. It lends his rate for
SMTP. When SMTP stops sending his packets, WWW gets 100%.
If your HTB configuration is working well (HTB really works very well)
you have to wait some time for the TCP flows to become stable. How long
did your test last? How heavy is your e-mail? Did you leave www enough
time to reclaim its rights? How strong is your www flow? Are you
measuring average or instantaneous rates?

Best regards,

Leonardo Balliache





[LARTC] HTB burstable for 2 interface , how ?

2003-07-06 Thread Leonardo Balliache
Hi, Don:

 > INTERNET
 > |
 > | eth0 202.14.41.1
 > BW.Manager
 > | |
 > | +eth1 192.168.1.0/24
 > |
 > +--eth2 192.168.2.0/24
 >
 > Total incoming bandwidth to eth0 is 1024kbps,
 > should be shared to eth1 and eth2, which means each gets 512Kbps,
 > burstable to 1024Kbps if the other host is idle.
> This doesn't make sense to me.
> The fact that an internal host is idle does not justify not sending
> traffic TO it.
>
> The suggestions to use IMQ+HTB seem to miss the problem that
> if someone sends 1024 to eth1 then nobody has a chance to even
> begin to send anything to eth2.
Why not?

Ignore those bandwidth controller devices for a moment.

TCP flows compete to grab the available bandwidth. If your link capacity
is 1024kbit and you start sending 1024kbit to interface eth1, and some
time later (as long as you want) begin to send 1024kbit to interface
eth2, you can be truly sure that when the flows stabilize each of them
will be trying to grab 50% of the available bandwidth defined by the
link capacity. Finally each flow will get 512kbit.

Now put your controller devices back on. They can control the upper
level of the flows, but not the flows themselves. TCP is always testing
the bandwidth availability, trying to get a larger share of it. What
stops this? When a packet is dropped, the congestion control mechanism
fires and TCP automatically reduces its rate. Finally the fight
converges to every flow sharing the upper level defined by the
controller device, as long as those flows have enough packets to claim
it.

Conclusion: use your bandwidth controllers, but never forget TCP's
natural behavior.
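
The convergence argument can be illustrated with a toy model (my own
simplification, not real TCP: one additive increase per round trip, and the
larger flow halves its window whenever the link overflows):

```shell
# Toy AIMD model: flow a starts owning the whole link, flow b starts
# from nothing; both increase additively, and the bigger one halves
# on congestion.  Both end up near cap/2.
awk 'BEGIN {
    a = 100; b = 1; cap = 120
    for (t = 0; t < 500; t++) {
        a++; b++                        # additive increase
        if (a + b > cap) {              # congestion event
            if (a >= b) a = int(a / 2); else b = int(b / 2)
        }
    }
    printf "a=%d b=%d\n", a, b
}'
```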

Best regards,

Leonardo Balliache





[LARTC] Linux policing

2003-06-21 Thread Leonardo Balliache
At 09:14 a.m. 20/06/03 +0100, you wrote:

Hi Andrew,

I'm not sure I understand what your answer is. Policing is done at
ingress. If you are talking about the process of setting the DSCP, that
is done at egress using DSMARK. I don't know (talk with Patrick or Stef)
if by using IMQ (as I understand it, some kind of virtual interface; I'm
not sure either) you could install DSMARK on that interface to mark
packets when they enter the router. I really don't know. Again, I'm not
sure I am understanding the sense of your answer.

If you are not in a hurry, do not hesitate to contact me again, but I'm
always late with my e-mail replies.

Best regards,

Leonardo Balliache


cheers.
Having had another look at the kernel, and at the LARTC HOWTO, it seems
that tc filter, when policing, may be able to reclassify out-of-profile
traffic to BE, but no more than this, without first putting the traffic
into a queue.
Andrew

-Original Message-----
From: Leonardo Balliache [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 18, 2003 7:39 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: LARTC digest, Vol 1 #1233 - 16 msgs
Andrew:

Differentiated Service on Linux HOWTO (work in progress) could be of some
help for you.
Have a look at http://opalsoft.net/qos

Best regards,

Leonardo Balliache

>Message: 6
>From: "Burnside, Andrew" <[EMAIL PROTECTED]>
>To: "'[EMAIL PROTECTED]'" <[EMAIL PROTECTED]>
>Date: Wed, 18 Jun 2003 13:24:44 +0100
>Subject: [LARTC] DiffServ Marking
>
>I am trying to compare the behaviour of the Linux DiffServ implementation
>with that of Cisco, in DSCP remarking for traffic policing.
>As I understand it, the DSCP is marked at the egress interface (parent
>queue), based on the class that packets are in.
>
>I am looking at what happens at an inter-AS boundary.
>DSCP marked traffic coming into the Edge Router need to be policed and
>remarked:
>e.g. EF traffic up to 2Mbps marked as EF
>EF traffic beyond 2Mbps should be policed and remarked as BE.
>Is there a way to do this remarking before the traffic is segregated into
>egress queues?
>
>Cheers
>
>Andrew




[LARTC] Re: LARTC digest, Vol 1 #1233 - 16 msgs

2003-06-18 Thread Leonardo Balliache
Andrew:

Differentiated Service on Linux HOWTO (work in progress) could be of some 
help for you.

Have a look at http://opalsoft.net/qos

Best regards,

Leonardo Balliache

Message: 6
From: "Burnside, Andrew" <[EMAIL PROTECTED]>
To: "'[EMAIL PROTECTED]'" <[EMAIL PROTECTED]>
Date: Wed, 18 Jun 2003 13:24:44 +0100
Subject: [LARTC] DiffServ Marking
I am trying to compare the behaviour of the Linux DiffServ implementation
with that of Cisco, in DSCP remarking for traffic policing.
As I understand it, the DSCP is marked at the egress interface (parent
queue), based on the class that packets are in.
I am looking at what happens at an inter-AS boundary.
DSCP marked traffic coming into the Edge Router needs to be policed and
remarked:
e.g. EF traffic up to 2Mbps marked as EF
EF traffic beyond 2Mbps should be policed and remarked as BE.
Is there a way to do this remarking before the traffic is segregated into
egress queues?
Cheers

Andrew




[LARTC] Differentiated Service on Linux HOWTO

2003-06-14 Thread Leonardo Balliache
Hi guys,

I understand it's a little early to talk about this, but I'm writing a
new document for The Linux Documentation Project site, called
"Differentiated Service on Linux HOWTO". A link to the work in progress
is on my site at http://opalsoft.net/qos.

If you have a little time, have a look at it; any feedback, comments or
criticism are welcome.

Best regards,

Leonardo Balliache



[LARTC] A beginner

2002-10-13 Thread Leonardo Balliache

Hi,

Have a look at: http://qos.ittc.ukans.edu/howto/index.html

Best regards,

Leonardo Balliache

At 02:40 p.m. 11/10/02 +0200, you wrote:
>Message: 9
>From: "Emmanuel SIMON" <[EMAIL PROTECTED]>
>To: <[EMAIL PROTECTED]>
>Date: Fri, 11 Oct 2002 14:39:00 +0200
>Subject: [LARTC] A beginner
>
>Hi all,
>
>I begin to work on QoS with Linux.
>So, I am very interested with LARTC.
>I have allready read the LARTC HOWTO and the QoS Connection Tuning HOWTO.
>I am looking for a doc that would be more theoritical than the howtos.
>Especially, I am looking for the definitions of classes, queues, filters and
>so on...
>
>Can someone send me URLs like that or titles of books, please.
>
>Thank you
>Emmanuel
>
>PS: sorry for my poor English
>
>






[LARTC] Netfilter API -Kylix

2002-09-22 Thread Leonardo Balliache

Hi,

I wrote http://tldp.org/HOWTO/Querying-libiptc-HOWTO/ published by The
Linux Documentation Project.

It's a very simple C program. I hope it can help you.

Best regards,

Leonardo Balliache



At 06:29 a.m. 21/09/02 +0200, you wrote:
>Message: 1
>Date: Fri, 20 Sep 2002 13:53:11 -0700 (PDT)
>From: Reza Alavi <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: [LARTC] Netfilter API -Kylix
>
>Dear Friends,
>Does anyone know any kylix source which use netfilter
>API?
>what about some "simple" C examples?
>(I have seen libiptq man page ;) )
>
>I want to write a program (with kylix if it is
>possible)to monitor the traffic of an IP address and
>whenever its credit is over (which will be calculated
>againts the traffic) simply reject any traffic to/from
>that IP, any idea or clue ?
>
>Thanks in advance.






Re: [LARTC] latency simulation

2002-09-19 Thread Leonardo Balliache

Hi Hannes,

You are right. Thanks to Stef for the information. I'll take a copy too.

Best regards,

Leonardo Balliache

At 06:35 p.m. 18/09/02 +0200, you wrote:
>Message: 5
>Date: Wed, 18 Sep 2002 10:04:36 +0200
>From: Hannes Ebner <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: Re: [LARTC] latency simulation
>
>Hi Leonardo,
>
>  > Have a look to tbf. It has a selectable latency parameter that adjust
>  >  the queue length. http://lartc.org/lartc.txt paragraph 9.2.2. Token
>  > Bucket Filter can give you more information.
>
>from the HOWTO: "...the latency parameter, which specifies the maximum
>amount of time a packet can sit in the TBF...".
>
>_Can_ sit in the TBF, not _will_ sit. Therefore it is also possible to
>get the packet forwarded with a latency smaller than the one I have
>given to the TBF - but I need a method to decrease it to a fixed amount.
>
>Regards,
>Hannes






[LARTC] Re: LARTC digest, Vol 1 #767 - 15 msgs

2002-09-17 Thread Leonardo Balliache

Hi,

Have a look at tbf. It has a selectable latency parameter that adjusts
the queue length. Paragraph 9.2.2 (Token Bucket Filter) of
http://lartc.org/lartc.txt can give you more information.
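
For example (rates picked arbitrarily to mimic a 64 kbit/s ISDN-like link;
adjust to taste):

```shell
# TBF at 64kbit with at most 100ms of queueing latency; burst sized
# to roughly one MTU and a half of tokens.
tc qdisc add dev eth0 root tbf rate 64kbit burst 1540 latency 100ms
```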

Best regards,

Leonardo Balliache


At 06:45 p.m. 17/09/02 +0200, you wrote:
>Message: 12
>Date: Tue, 17 Sep 2002 16:19:12 +0200
>From: Hannes Ebner <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: [LARTC] latency simulation
>
>Hi,
>
>how could I achieve a simulation of Bandwidth _and_ Latency?
>
>In detail, I need to simulate the characteristics of ISDN and DSL on a
>simple Ethernet-Connection:
>
> ISDN 64 Kbit/s und 10 ms latency
> DSL 128 Kbit/s und 50 ms latency
>
>I would use HTB for the limitation of the Bandwidth, but how am I able
>to increase the latency?
>
>Thank you in advance,
>Hannes






[LARTC] Re: LARTC digest, Vol 1 #766 - 9 msgs

2002-09-17 Thread Leonardo Balliache

Hi,

In a busy network you need to use prio. Check how many channels of voice
you need and the bandwidth required by each channel (16-32 kbit/s?). Put
voice (UDP packets) in the highest priority prio queue and use tbf behind
it to limit the voice traffic to the maximum required for your channels.
Or use htb, putting your UDP traffic in the highest priority class.
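
A sketch of that layout (the device, rates and the 256kbit voice cap are
assumptions, not tested values):

```shell
# PRIO with its default three bands; UDP (protocol 17) goes to the
# highest priority band 1:1, capped there by a TBF.
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:1 handle 10: tbf \
   rate 256kbit burst 5kb latency 50ms
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
   match ip protocol 17 0xff flowid 1:1
```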

Be careful that the higher-prio traffic is no more than 20-30% of the
link capacity, to avoid starvation of the lower-prio traffic.

Don't expect good quality in a busy network. When congestion appears your
UDP packets will be dropped without mercy (UDP is an unresponsive
protocol), hurting quality.

Best regards,

Leonardo Balliache

PS: Cisco has (check your IOS version) RTP header compression, which can
help a lot to lower bandwidth requirements. You have to enable it at both
sides of the link. Also, why not try directly with Cisco PQ?

At 06:28 a.m. 17/09/02 +0200, you wrote:
>From: Andreani Luca <[EMAIL PROTECTED]>
>To: "LARTC mailing list (E-mail)" <[EMAIL PROTECTED]>
>Date: Mon, 16 Sep 2002 11:49:28 +0200
>Subject: [LARTC] Voip and Qos tests
>
>Hello list,
>
>I'm performing some tests in order to evaluate the possibility to make Voip
>calls on a busy network.
>I use some cisco routers with wfq enabled. I want to introduce some linux
>boxes acting as routers.
>I am especially interested in low-speed (64-128 kbps) links with PPP and
>frame-relay.
>
>Haw you some reference about similar tests?
>
>Whath qdisc (from tc) has best performance in these situations?
>
>Thank's
>
>Luca






[LARTC] Kernel Packet Traveling Diagram

2002-09-11 Thread Leonardo Balliache

Hi,

The diagram is not complete. If you check previous messages in this list
you will see that mangle INPUT, FORWARD and POSTROUTING are not included
yet. It's my responsibility to update the diagram, and I'm going to do it
as soon as possible.

Mangle always comes before nat; have a look at
http://www.netfilter.org/documentation/HOWTO//netfilter-hacking-HOWTO.txt

Best regards,

Leonardo Balliache.





[LARTC] linux NETWORKING diagram ?

2002-07-24 Thread Leonardo Balliache

Hi.

You wrote:

 > OK. I've made the diagram in Dia (attached). You can easily export it
 > in any format (any linux distro has Dia, I think)...

Could you post your diagram in ascii?

 > I made some changes, maybe they are wrong, pls correct me... I thought
 > that all "mangle" and "nat" stuff should be "IPTABLES", and all
 > "ipchains" stuff should go away !! ?

As I understand it, there is some original ipchains code running in
iptables, but no iptables code to emulate ipchains behavior, just the
ipchains filter code. Have a look at LARTC digest, Vol 1 #641, #642,
#651, #652 and #655.

 > - What is "mark-rewrite" ?!

It would be better "mark-write".

It's the packet mark written in the mangle table (fwmark). This mark survives 
only inside the host or router that sets it (the packet is not actually marked 
when it leaves the router).
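For illustration, a mark set in the mangle table can be matched with "ip rule"
on the same box. The interface, mark value, table number and gateway below are
made-up examples, not from the thread; the commands need root, iptables and
iproute2:

```shell
# Hypothetical sketch: set fwmark 1 on web traffic in the mangle table.
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 1

# The mark lives only inside this router; it can steer policy routing.
# Table 100 and gateway 10.0.0.1 are invented for the example.
ip rule add fwmark 1 table 100
ip route add default via 10.0.0.1 table 100
```

The mark never appears on the wire; it only exists in the skb while the packet
traverses this host.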

 > - there is something which has to be added somewhere around
 > "forwarding"; can't figure out what/where?! Pls do it, or tell me, I
 > will do it.
 > - in my opinion (6a.3) output-routing should go in the place of (7)!!!
 > Am I right?

I don't have your diagram. BTW, the diagram misses some hooks (these must be added):

conntrack PREROUTING just before mangle PREROUTING.
conntrack INPUT after filter INPUT but before LOCAL PROCESS.
mangle INPUT after conntrack INPUT also before LOCAL PROCESS.
mangle FORWARD before filter FORWARD.
conntrack OUTPUT before mangle OUTPUT but after OUTPUT ROUTING.
mangle POSTROUTING before nat POSTROUTING.
conntrack POSTROUTING after nat POSTROUTING but before QOS EGRESS.

When the discussion matures I will post these changes to Stef.

Best regards,

Leonardo Balliache





[LARTC] Attaching a RED to a CBQ

2002-07-22 Thread Leonardo Balliache

Hi Jan:

Perhaps this could help you.

Best regards,

Leonardo Balliache

PS: I have never implemented one of these monsters.

NAME
red - Random Early Detection

SYNOPSIS
tc  qdisc  ...  red  limit bytes min bytes max bytes
avpkt bytes burst packets [ ecn ] [ bandwidth rate ]
probability chance

DESCRIPTION
Random  Early  Detection is a classless qdisc which limits
its queue size smartly. Regular queues simply drop packets
from  the  tail  when  they are full, which may not be the
optimal behaviour. RED also performs tail drop,  but  does
so in a more gradual way.

Once  the  queue  hits  a  certain average length, packets
enqueued have a configurable chance of being marked (which
may  mean dropped). This chance increases linearly up to a
point called the max average queue  length,  although  the
queue might get bigger.

This  has  a  host of benefits over simple taildrop, while
not being processor  intensive.  It  prevents  synchronous
retransmits  after a burst in traffic, which cause further
retransmits, etc.

The goal is to have a small queue size, which is good for
interactivity while not disturbing TCP/IP traffic with too
many sudden drops after a burst of traffic.

Depending on whether ECN is configured, marking either
means dropping or purely marking a packet as overlimit.

ALGORITHM
The average queue size is used for determining the marking
probability.  This  is  calculated  using  an  Exponential
Weighted  Moving Average, which can be more or less sensi-
tive to bursts.

When the average queue size is below min bytes, no  packet
will  ever be marked. When it exceeds min, the probability
of doing so climbs linearly up to probability,  until  the
average  queue size hits max bytes. Because probability is
normally not set to 100%, the queue size might conceivably
rise  above  max bytes, so the limit parameter is provided
to set a hard maximum for the size of the queue.

PARAMETERS
min    Average queue size at which marking becomes a possibility.

max    At this average queue size, the marking probability
       is maximal. Should be at least twice min to prevent
       synchronous retransmits, higher for low min.

probability
   Maximum  probability  for  marking,  specified as a
   floating point number from 0.0  to  1.0.  Suggested
   values are 0.01 or 0.02 (1 or 2%, respectively).

limit  Hard  limit on the real (not average) queue size in
   bytes. Further packets are dropped. Should  be  set
   higher  than max+burst. It is advised to set this a
   few times higher than max.

burst  Used for determining how  fast  the  average  queue
   size  is  influenced by the real queue size. Larger
   values make the calculation more sluggish, allowing
   longer  bursts  of  traffic  before marking starts.
   Real life experiments support the following  guide-
   line: (min+min+max)/(3*avpkt).

avpkt  Specified  in  bytes.  Used with burst to determine
   the time constant for average queue  size  calcula-
   tions. 1000 is a good value.

bandwidth
   This rate is used for calculating the average queue
   size after some idle time. Should  be  set  to  the
   bandwidth of your interface. Does not mean that RED
   will shape for you! Optional.

ecn    As mentioned before, RED can either 'mark' or
       'drop'. Explicit Congestion Notification allows RED
       to notify remote hosts that their rate exceeds the
       amount of bandwidth available. Non-ECN capable
       hosts can only be notified by dropping a packet.
       If this parameter is specified, packets which indicate
       that their hosts honor ECN will only be marked
       and not dropped, unless the queue size hits limit
       bytes. Needs a tc binary with RED support compiled
       in. Recommended.

SOURCES
o  Floyd, S., and Jacobson, V., Random Early Detection
   gateways  for   Congestion   Avoidance.
   http://www.aciri.org/floyd/papers/red/red.html
o  Some   changes   to  the  algorithm  by  Alexey  N.
   Kuznetsov.
SEE ALSO
tc-cbq(8), tc-htb(8), tc-red(8), tc-tbf(8), tc-pfifo(8),
tc-bfifo(8), tc-pfifo_fast(8), tc-filters(8)
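As a worked example of the parameter guidelines above (the byte values,
interface and bandwidth are illustrative choices, not a recommendation):

```shell
#!/bin/sh
# Derive RED parameters following the man-page guidelines above.
min=15000            # marking becomes possible at a 15 KB average queue
max=45000            # at least twice min, per the PARAMETERS section
avpkt=1000           # suggested average packet size
limit=$((3 * max))   # hard limit: a few times higher than max
# burst guideline from the man page: (min+min+max)/(3*avpkt)
burst=$(( (min + min + max) / (3 * avpkt) ))
echo "burst=$burst limit=$limit"
# The resulting command (needs root and a tc binary with RED support):
# tc qdisc add dev eth0 root red limit $limit min $min max $max \
#     avpkt $avpkt burst $burst bandwidth 1mbit probability 0.02 ecn
```

With these numbers the guideline gives burst=25 and limit=135000.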






Re: [LARTC] kernel packet traveling diagram

2002-06-30 Thread Leonardo Balliache
If the packet has to be forwarded to another device,
the function net/ipv4/ip_forward.c:ip_forward() is called.

The first task of this function is to check the ip header's TTL. If it
is <= 1 we drop the packet and return an ICMP time exceeded message to the
sender.

We check whether there is enough tailroom for the destination device's
link layer header and expand the skb if necessary.

Next the TTL is decremented by one.

If our new packet is bigger than the MTU of the destination device and the
don't fragment bit in the IP header is set, we drop the packet and send an
ICMP frag needed message to the sender.

Finally it is time to call another one of the netfilter hooks - this time it
is the NF_IP_FORWARD hook.

Assuming that the netfilter hook returns an NF_ACCEPT verdict, the
function net/ipv4/ip_forward.c:ip_forward_finish() is the next step in our
packet's journey.

ip_forward_finish() itself checks if we need to set any additional options in
the IP header, and has ip_opt *FIXME* doing this. Afterwards it calls
include/net/ip.h:ip_send().

If we need some fragmentation, *FIXME*:ip_fragment gets called, otherwise we
continue in net/ipv4/ip_forward:ip_finish_output().

ip_finish_output() again does nothing else than calling the netfilter
postrouting hook NF_IP_POST_ROUTING and calling ip_finish_output2() on
successful traversal of this hook.

ip_finish_output2() prepends the hardware (link layer) header to our
skb and calls net/ipv4/ip_output.c:ip_output().
-

The *FIXME* markers are actually present in Harald's document.

OK, as I understand it, the second IMQ hook must be after the netfilter
postrouting hook NF_IP_POST_ROUTING but before the call to the link layer
function ip_output() in ip_output.c.

   |
   +---+--+
   | nat  |
   | POSTROUTING  | SOURCE REWRITE
   +---+--+
   |

is IMQ probably here ??

   |
   +---+--+
   | QOS  |
   |EGRESS| <- controlled by tc
   +---+--+
   |
---+---
    Network

I'm not sure again. Perhaps Patrick, if he is reading this, can help a little.

Best regards,

Leonardo Balliache

PS: thanks a lot for uploading the diagram to your site.





Re: [LARTC] kernel packet traveling diagram

2002-06-30 Thread Leonardo Balliache

Hi, Jan.

You wrote:

 > Are you sure? In the previous diagram, the PRDB was checked before the
 > packet hits the QOS Ingress. If the PRDB indeed is checked after QOS Ingress
 > (i.e. in INPUT ROUTING), which seems the logical way, is it possible (with a
 > patch???) to check the tc_index in "ip rule"? This would make it possible to
 > let the output of the QOS ingress participate in the policy routing.

As I understand:

After Julian's observation I believe the first diagram was wrong; reading
again from the "IPROUTE2 Utility Suite Howto":

---
Rules in routing policy database controlling route selection algorithm.

Classic routing algorithms used in the Internet make routing decisions based
only on the destination address of packets and in theory, but not in practice,
on the TOS field. In some circumstances we want to route packets differently
depending not only on the destination addresses, but also on other packet
fields such as source address, IP protocol, transport protocol ports or even
packet payload. This task is called "policy routing".

To solve this task the conventional destination based routing table, ordered
according to the longest match rule, is replaced with the "routing policy
database" or RPDB, which selects the appropriate route through execution of
some set of rules. These rules may have many keys of different natures and
therefore they have no natural ordering excepting that which is imposed by the
network administrator. In Linux the RPDB is a linear list of rules ordered by
a numeric priority value. The RPDB explicitly allows matching packet source
address, packet destination address, TOS, incoming interface (which is packet
metadata, rather than a packet field), and using fwmark values for matching
IP protocols and transport ports.
-

ip rule is input routing (and has access to TOS field).

Reading from Almesberger.-

--
1) DSCP are the upper six bits of DS field.
2) DS field is the same as TOS field.
--

Reading ip rule code in iproute2 package from Kuznetsov.-

-
1) ip rule uses the iprule_modify function to set rules.
2) iprule_modify uses rtnetlink calls through libnetlink. The rtmsg structure
is used as a channel to interchange information.
3) One of the fields of the rtmsg structure is rtm_tos.
4) You can check this octet through the ip rule "tos TOS" selector.
-
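As a hedged sketch of point 4 above (the TOS value, table number and gateway
are hypothetical, chosen only for illustration):

```shell
# Route minimize-delay traffic (TOS 0x10) through a separate table.
# Table 10 and gateway 10.0.0.2 are made-up values; needs root.
ip rule add tos 0x10 table 10
ip route add default via 10.0.0.2 table 10
ip rule ls    # verify the new rule is installed
```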

Also from Differentiated Services on Linux (Almesberger) - 06/1999:


   When using "sch_dsmark", the class number returned by the
   classifier is stored in skb->tc_index. This way, the result can be
   re-used during later processing steps.

   Nodes in multiple DS domains must also be able to distinguish
   packets by the inbound interface in order to translate the DSCP to
   the correct PHB. This can be done using the "route" classifier, in
   combination with the "ip rule" command interface subset.
---
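The "route classifier plus ip rule" combination from the quote can be sketched
roughly as below. The interfaces, realm number and class id are assumptions on
my part; see the route classifier (realms) section of the LARTC HOWTO for the
authoritative recipe:

```shell
# Tag packets arriving on eth0 with realm 2 ("iif" selects the
# inbound interface, as the quote describes). Needs root.
ip rule add iif eth0 table main realms 2

# On egress, the route classifier maps realm 2 to class 1:2 of a
# previously created classful qdisc with handle 1: on eth1.
tc filter add dev eth1 parent 1:0 protocol ip prio 100 \
    route from 2 classid 1:2
```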

I hope this answers your question; any feedback from experienced
people on the list is welcome.

 > FYI, there is a iptables patch out there, called mangle5hooks, so the mangle
 > table registers all 5 netfilter hooks. This implies that the mangle table
 > has 5 chains instead of 2, PREROUTING, INPUT, OUTPUT, FORWARD and
 > POSTROUTING.

I will try to update the diagram. These words from Julian scare me a little:


 It was JFYI. I'm not sure whether we can find a place in the
diagram for all programs in the world that are using NF hooks :) Of course,
you can go further and build a jumbo picture of the NF world :)


Best regards,

Leonardo Balliache





[LARTC] kernel packet traveling diagram

2002-06-27 Thread Leonardo Balliache

Sorry everyone, I'm late.. always!!

Here is a new version of the kernel packet traveling diagram. Thanks a lot
to Julian Anastasov for his comments (notes at the end, as I understood them).

I insist it's nice to have this diagram ready and updated. Note that
when Jan Coppens needed to tell us where he needs to mark packets,
he just said "At this point I should need another mangle table->",
using the diagram as a reference.

We understand that internal kernel code is complex and interlaced, and it
is not always possible to identify each part of it clearly in a simple
diagram. But we keep trying.

I've got some comments:

1) I didn't know there was ipchains code in kernel 2.4; I supposed the new
iptables code totally replaced the old ipchains code. Any feedback
about this would be useful.

2) Below is a link to an article from Harald Welte, "The journey
of a packet through the linux 2.4 network stack"; it could help the
discussion and lead to an improved diagram, if possible.

http://www.gnumonks.org/ftp/pub/doc/packet-journey-2.4.html

3) TODO: include LVS in the diagram. Julian gave us this link to study the
issue and try to complete the diagram.

http://www.linuxvirtualserver.org/Joseph.Mack/HOWTO/LVS-HOWTO-19.html#ss19.21

4) Of course, the diagram is ready to be shot at. Any comment, criticism,
etc., is welcome.

Best regards,

Leonardo Balliache



 Network
 ---+---
|
+---+--+
|mangle|
|  PREROUTING  | <- MARK REWRITE
+---+--+
|
+---+--+
|  nat |
|  PREROUTING  | <- DEST REWRITE
+---+--+
|
+---+--+
|   ipchains   |
|FILTER|
+---+--+
|
+---+--+
| QOS  |
|   INGRESS| <- controlled by tc
+---+--+
|
 packet is for  +---+--+ packet is for
 this address   | INPUT| another address
 +--+ROUTING   +---+
 |  |+ PRDB|   |
 |  +--+   |
 +---+--+  |
 |filter|  |
 |INPUT |  |
 +---+--+  |
 | |
 +---+--+  |
 |Local |  |
 |   Process|  |
 +---+--+  |
 | |
 +---+--+  |
 |OUTPUT|  +---+---+
 |ROUTING   |  |filter |
 +---+--+  |FORWARD|
 | +---+---+
 +---+--+  |
 |mangle|  |
 |OUTPUT| MARK REWRITE |
 +---+--+  |
 | |
 +---+--+  |
 | nat  |  |
 |OUTPUT| DEST REWRITE |
 +---+--+  |
 | |
 +---+--+  |
 |filter|   

[LARTC] Microsoft is working...

2002-06-02 Thread Leonardo Balliache

Have a look at this article I took from The Washington Post: Microsoft is 
lobbying hard against free software. They have already deleted the article, 
but I made a copy first.

http://opalsoft.net/qos/wpost/Washtech_com.htm

Best regards,

Leonardo.-





[LARTC] Connection attempt to UDP port 137

2002-06-01 Thread Leonardo Balliache

Hi Naim:

Check with "ps ax" whether you have the samba daemons running. Look for the
smbd and nmbd servers.

Best regards,

Leonardo Balliache


Naim wrote:

There were too many connection attempts to my server at port 137 with UDP and 
TCP when I looked into the log files. In the file /etc/services it says
netbios-ns 137/tcp #NETBIOS Name Service
netbios-ns 137/udp #NETBIOS Name Service
I'm not sure what this service is for, so I disabled it. I'm using FreeBSD 
4.5-STABLE. AFAIK this service is Microsoft proprietary, so can anyone 
explain what this service is for in a unix environment, or is there any 
remote exploit for it?
TIA
-Naim





[LARTC] dynamic routing

2002-05-31 Thread Leonardo Balliache

Hi Andreani,

I don't think so. With 3 PCs you can set up 3 routers, but you also need some 
clients to complete your test. I walked down this road some time ago and I 
really couldn't create any useful scenario.

If you have better luck, give us feedback.

Best regards,

Leonardo Balliache

 >I want to test the linux features as dynamic router.
 >In particular I want to run gated and zebra.
 >Is it possible, with two or three PCs, to configure a test scenario and
 >evaluate these features?
 >Best regards
 >Luca Andreani





[LARTC] 2 NICS. More Bandwidth?

2002-05-31 Thread Leonardo Balliache

Hi, Svavar :

I have set up bonding and it works fine... but...

Before spending your money, first try to figure out what your system's 
bottleneck is...

I purchased another NIC and installed bonding, only to discover that my 
bottleneck was the server's hard disk.

Best regards,

Leonardo.-






[LARTC] zebra documentation

2002-05-23 Thread Leonardo Balliache

Hi Andreani:

Nothing. Just some badly written and worse explained documents out there on 
the web.

I have an old HTML document about gated (a routing daemon older than zebra: 
RIP, RIP II, OSPF, IS-IS, etc.). If you want it, e-mail me and I'll send it 
to you as an attachment. Also, for protocol information, try cisco at 
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/index.htm.

Best regards,

Leonardo.-







[LARTC] Routing from a box behind two NAT'ing routers

2002-05-22 Thread Leonardo Balliache

Hi,

iproute2 has a command that perhaps could help you.

ip route add default scope global equalize nexthop dev ppp0 \
nexthop dev ppp1

(actually if you know peer addresses of ppp* use it instead).

I've never tried it, but it's a possibility.

You have to have two NICs in your web server; replace ppp0 and ppp1 with 
eth0 and eth1. Also have a look at Alexey's iproute2 documentation, because 
he says that this command equalizes load across the 2 NICs. Connect each 
NIC to each incoming line using 2 different address spaces.
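Since the post suggests using the peer addresses when they are known, the
two-NIC variant could look like this (the gateway addresses and device names
are hypothetical stand-ins for your providers' next hops):

```shell
# Two NICs, two incoming lines; 10.0.1.1 and 10.0.2.1 stand in
# for the real next-hop gateways on each line. Needs root.
ip route add default scope global equalize \
    nexthop via 10.0.1.1 dev eth0 \
    nexthop via 10.0.2.1 dev eth1
```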

Give us feedback about your experiences.

Best regards,

Leonardo Balliache

