[LARTC] Packet dropping schemes

2006-12-28 Thread Jonathan Day
There are a VERY large number of packet dropping
schemes in existence, of which some have been
implemented for Linux and others have implementations
in Open Source environments that could probably be
ported.

I thought I'd be a nuisance and list the schemes I
know of and the status (as far as I know it). What I
would like is if people who (a) know of
implementations that should be here could add them,
and (b) know of compelling reasons why a scheme should
NEVER (or almost never) be used could give the reason.

The problem I'm having is that with 17 different
schemes, I can only find Open Source implementations
of three, and one of those is only for *BSDs. If for
no other reason than Linux makes network research
relatively trivial, I have to believe that the other
algorithms are either in public patches that hardly
anyone knows about, OR there is a catastrophic flaw of
some kind that makes using them in a general-purpose
OS a Really Bad Idea.

So, where are they and/or what is the problem with
them?


RED (Implemented as a queue)
Generic RED (Implemented as a queue)
Stabilized RED
Fair RED
Adaptive RED
Gentle RED
Exponential RED
RED+
BLUE (BSD implementation)
Stochastic BLUE
BLACK
GREEN
PURPLE
WHITE
CHOKe
MAFIC
HAWK

(For those who have got this far, MAFIC and HAWK are
intrusion/attack countermeasure dropping schemes and
look very intriguing.)




Re: [LARTC] esfq ? or wrr ?

2005-10-14 Thread Jonathan Day
I think it depends on the type of traffic you're
expecting from the different users. If you're
expecting very similar patterns of behaviour, then my
guess would be that ESFQ would be the better choice.

If, on the other hand, the network load is going to
shift over time, between the users, then WRR would
seem the more logical choice.

You might also want to look at HFSC (Hierarchical Fair
Service Curve) - it's possible you might be able to
get what you want from the single algorithm, rather
than piping through several. The fewer layers you
have, the less latency you'll introduce. HFSC also has
the advantage that it is standard in the kernel, so
likely has better testing.

ESFQ and WRR have been forward-ported, well,
sometimes, but only the combined -qos patch seems to
be current - the individual patches don't seem to be
maintained at all.

I would like to see the patches cleaned up (as
necessary) then submitted for merging into the
mainstream kernel. Linux' QoS code is in frankly
horrible shape at the moment, so anything that stirred
interest in it would almost have to be a good thing,
even if the patches themselves didn't get included any
time soon.
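
For reference, a minimal sketch of the setup being
discussed - an HTB leaf class with a fairness qdisc
hung underneath it. The esfq line assumes the ESFQ
patch is applied; the commented-out sfq line is the
stock-kernel near-equivalent (fair per flow rather
than per destination host). Device and rates are only
examples:

  # 128kbit HTB class on the interface facing the users
  tc qdisc add dev eth0 root handle 1: htb default 10
  tc class add dev eth0 parent 1: classid 1:10 htb rate 128kbit ceil 128kbit
  # share the class fairly per destination host (needs the ESFQ patch)
  tc qdisc add dev eth0 parent 1:10 handle 10: esfq perturb 10 hash dst
  # stock-kernel fallback: plain SFQ, fair per flow rather than per host
  # tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10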

--- LinuXKiD <[EMAIL PROTECTED]> wrote:

> Hi
> 
> If I have a HTB class with 128kbit, and I want to 
> put "N" users in that class ( in order to share 
> bandwidth fairly ) , 
> 
> which is better for me ?  esfq (hash dst)  or wrr ?
> 
> I would attach esfq or wrr to HTB parent class.
> 
> Also, I've read in Jim's script that a RED qdisc is
> put over WRR, but I don't understand it.
> 
> bests
> 
> andres






Re: [LARTC] HOWTO unmaintained?

2005-08-17 Thread Jonathan Day
It seems strange that astronomers discovered a whole
set of Black Holes at about the time the maintainers
vanished...

It looks like a number of people are offering sites -
IMHO, a "distributed" wiki (ie: you can edit at any of
the sites) or a master/mirror setup would be good, as
that would help prevent problems if site maintainers
get kidnapped by aliens, sites get slashdotted, etc.

It would also be good if at least one site offered
multiple ways to connect - eg: via an IPSec tunnel or
via IPv6 - as this would give people a simple way of
testing what they're trying.

--- Kenneth Kalmer <[EMAIL PROTECTED]> wrote:

> On 8/17/05, Ed W <[EMAIL PROTECTED]> wrote:
> > 
> > >I guess the obvious question then is: How do we
> get it maintained?
> > >
> > >Does anyone know where the current maintainers
> have disappeared?
> > >
> > >Is anyone willing to take over that job?
> > >
> > >
> > 
> > I wonder if someone would host a mediawiki and
> consider uploading the
> > documentation there.  This would make it easier
> for people to
> contribute, and I think it should be fairly easy to
> convert from its
> current format to a wiki
> > 
> > Just a thought
> > 
> 
> And a great one I might add. Does anybody know how
> busy the current
> site is? If not too busy (i.e.< 10GB a month) I'd
> gladly put up a wiki
> on my server for it. If it gets busier I'll just
> have to move it to
> another server in due course.
> 
> I've also gotten very frustrated with some old
> outdated information,
> and especially the lack of information regarding the
> 2.6.x kernel.
> 
> All in favour...?
> 
> Regards
> 
> -- 
> 
> Kenneth Kalmer
> [EMAIL PROTECTED]
> 
> [EMAIL PROTECTED] stats
>
http://vspx27.stanford.edu/cgi-bin/main.py?qtype=userpage&username=kenneth%2Ekalmer




Re: [LARTC] Latency of Linux Bridge

2005-07-22 Thread Jonathan Day
Hi,

It becomes possible to play with a bunch of
CPU-related timers in the 2.6.13-rc series, which MAY
help (but no guarantees). The latest tree also has
some scheduling fixes which probably won't make much
of a difference to you.

Standard distro kernels tend to be compromises, which
means they'll be OK for everything but great at
nothing. If you want to squeeze every last bit of
performance out of the machine, you'll need to do some
kernel configuration work.

The latest "vanilla" kernel is 2.6.13-rc3 and the
latest Andrew Morton release is 2.6.13-rc3-mm1

(The differences that MAY be useful to you is that the
-mm release has some driver fixes for ethernet cards.)

If you roll your own kernel and are wanting to use it
for a bridge setup, my guess is you want to use the
server settings for preempt (no forced preemption -
ie: pretty much disable it) but would likely want to
use the desktop settings for the timer frequency (1000
Hz) as that gives the fastest response to events. (If
you're using an SMP machine, 250 Hz might be better,
as SMP doesn't like lots of interrupts.)
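
As a rough illustration only (option names as they
appear in a 2.6.13-era tree; other versions may
differ), those two choices look like this in the
kernel .config:

  # "server" preemption model: no forced preemption
  CONFIG_PREEMPT_NONE=y
  # CONFIG_PREEMPT_VOLUNTARY is not set
  # CONFIG_PREEMPT is not set
  # "desktop" timer frequency: 1000 Hz for fast response to events
  CONFIG_HZ_1000=y
  CONFIG_HZ=1000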

Depending on who you ask and what phase the moon is
in, different people give different opinions about
whether to compile in or use modules. Compiled-in
drivers MAY be marginally faster and MAY eat
fractionally less kernel memory, which MIGHT trim down
latency a little.

If that's not serious voodoo enough, don't compile in
any network layers you're not using. Every layer is
absolutely going to add latency, because it is extra
code to run.

Finally, and this is going to be the hardest step, it
MAY be possible to get the Linux kernel to compile
with the latest Intel C compiler. If you're using a
genuine Intel processor, the speedup is something like
40% - for AMD or other ix86 processors, GCC is either
equal to or faster than Intel's compiler. The problem
with using Intel's C compiler is that it has very
different ideas from GCC about what is acceptable, so
the kernel won't necessarily compile. Sometimes people put icc
patches out, to fix this, but not all kernels have
patches and the kernel of interest is a pre-release,
making patches less likely in the event they are
needed.

Any of these steps should trim a little latency off,
and if you somehow manage all of them, you should get
quite a decent improvement. Whether the improvement is
worth the effort required is another matter.

--- "Christian Konecny (VI/SEA)"
<[EMAIL PROTECTED]> wrote:

> Hi there!
> 
> I am working a lot with VoIP in my company, so I
> thought to use linux bridge functionality together
> with tc to emulate delay, jitter, packet loss,
> duplication, reordering etc. for testing purposes in
> our lab against our VoIP products.
> I just noticed that a basic bridge, even with
> its minimum configuration of 2 network interfaces,
> creates latency of approx. 5 ms on very low traffic.
> This seems to be independent of CPU speed. I tried
> on a 2 GHz PC while having just 64 kbit of traffic with
> a packet size of about 300 bytes.
> I am using Knoppix 3.82 which is actually a debian
> Live-CD Linux, Kernel 2.6.11.
> For some reason they put iproute2 041019 on this
> distro, which is intended to be used for kernel
> 2.6.9.
> I am aware of remastering the CD, but have to check
> if it is possible to recompile the kernel for the
> remaster.
> 
> back to my question: where does this latency come
> from?
> "top" shows almost no load while the bridge is
> handling traffic, so how come?
> is there some timer granularity which can be set in
> the kernel, is the latency normal, or what else could
> cause it?
> 
> Thank you very much in advance!
> 
> /Christian




Re: [LARTC] QOS HELP PLEASE

2005-07-11 Thread Jonathan Day


--- "Martin A. Brown" <[EMAIL PROTECTED]>
wrote:

> 
> Dariusz,
> 
>  : so the sum of all the rates of the classes for
> the clients should be
>  : less than the rate of the class 1:2 ? or do i
> understand it badly ?
> 
> Indeed, you understand correctly.  Your client
> classes are leaf classes.
> 
>   - An HTB leaf class guarantees <rate> access.
>   - Above <rate>, the leaf class will borrow (from
> parents) up to <ceil>.
> 
> This bears repetition:  the guaranteed total of
> bandwidth, before HTB 
> shaping and borrowing begins, is the sum of the
> rates of the leaf classes.

(snip)

Can it ever be truly equal? There is going to be some
overhead in having the multiple layers, so although
the sum of the rates at level N can never exceed the
rate of the parent layer at N-1, that overhead means
the achievable total must be marginally less (even if
this is so marginal as to be hard to detect).
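
To make that concrete, a minimal sketch (device and
numbers invented for illustration): the leaf rates
below sum exactly to the parent's rate on paper, but
the measured goodput will sit fractionally below
1mbit once framing and qdisc overhead are paid for.

  tc qdisc add dev eth0 root handle 1: htb
  tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit
  # two leaf classes whose guaranteed rates sum to the parent's 1mbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 600kbit ceil 1mbit
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 400kbit ceil 1mbit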






Re: [LARTC] wrr question

2005-06-07 Thread Jonathan Day
Hi,

For something like this, where you're wanting to do
bandwidth capping, you're probably better off with
something like CBQ, which supports limits.

It sounds like you want soft limits of 4% (a fair
slice, when 25 users are present) and hard limits of
25%.
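
As a sketch of that soft/hard split - written with
HTB's rate/ceil rather than CBQ purely because it is
shorter to put down; device and addresses are
examples only:

  tc qdisc add dev eth0 root handle 1: htb
  tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit
  # one class like this per user: 40kbit guaranteed (the fair 1/25th
  # share) and a 256kbit ceiling (the hard 25% cap, even on an idle link)
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 40kbit ceil 256kbit
  tc filter add dev eth0 parent 1: protocol ip u32 \
      match ip dst 192.168.0.10/32 flowid 1:10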

Another option would be to use WRR and then use
pattern-matching in Netfilter to set the hard limit.

Part of the problem is that there are a very large
number of "Quality of Service" protocols, of which
Linux supports some, but that there is no really clear
cheat-sheet on what to use when, what works well with
what, and what capabilities each QoS method has.

Jonathan

--- Kenneth Kalmer <[EMAIL PROTECTED]> wrote:

> Guys
> 
> All the recent discussions, and the knowledge of a
> 2.6 port of WRR, have made me very keen on trying it.
> I had a look at the docs and examples now, but my
> mind is not in a very receptive state.
> 
> Take this simple example.
> 
> Incoming internet connection of 1mbps. Shared
> between up to 25 users
> simultaneously.
> 
> I know that WRR can fairly distribute the traffic
> amongst the
> currently connected clients at any specific time.
> I'd like to know how
> can I restrict any client from getting more than
> 256kbps (or 25%) of
> the total link speed, even when they are the only
> users.
> 
> Kind regards
> 
> -- 
> 
> Kenneth Kalmer
> [EMAIL PROTECTED]
> http://opensourcery.blogspot.com




Re: [LARTC] Routing inside box, is it posible ?

2005-05-26 Thread Jonathan Day
For anyone to answer that clearly, we'd need a bit
more information.

Firstly, what is connected to router1 and router2? In
other words, does the router box you want to use have
enough ports (and of the right type) to actually
handle all of the connections?

Secondly, can we assume that you have enough access to
the box you want to use to be able to install the
software and configure it? (This may sound an obvious
question, but you'd be amazed how many people don't
think of things like that.)

Thirdly, are there any performance constraints? In
other words, moving networks from router2 to router1
may impact router1's performance, as it now has to
directly process any routing protocols from all
subnets router2 listened to, and search a potentially
longer and more complex routing table.

Now, if it's a "trivial" network - router1 only
connects to two other devices - the outside network
and router2, and router2 likewise connects only to two
devices - router1 and the inside network, then you can
certainly drop one of the two routers. (In fact, in
such a simple network, you could probably have a
trivial computational device with a static routing
table. Dynamic routing isn't needed, if nothing
changes. Some switches support basic routing, and a
switch would offer much less latency than a router.)
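
For that trivial case, the whole replacement "router"
is only a few lines (addresses invented for
illustration; eth0 faces the Internet, eth1 the
users):

  # let the box forward between its interfaces
  echo 1 > /proc/sys/net/ipv4/ip_forward
  ip addr add 203.0.113.2/30 dev eth0
  ip addr add 192.168.1.1/24 dev eth1
  # nothing ever changes, so one static default route is all the routing
  ip route add default via 203.0.113.1 dev eth0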

--- "Rio Martin." <[EMAIL PROTECTED]> wrote:
> Dear folks,
> I am thinking about routing inside my router box. Is
> it possible?
> 
> INTERNET --> [eth0] ROUTER1 [eth1] --> [eth0]
> ROUTER2 [eth1] --> USERS
> 
> I want to make it simple just like this:
> 
> INTERNET --> [eth0] ROUTER [eth1] --> USERS
> 
> Please give me some clues, thanks before ..
> 
> Regards,
> Rio Martin.



Re: [LARTC] New queing discipline

2005-05-16 Thread Jonathan Day
Hmm. Personally, I would not implement a single queue
with multiple algorithms for parsing it. I can see
all sorts of possible race conditions arising out of
such an approach.

It would seem to be more logical to start by splitting
the traffic into multiple queues, according to the
type of traffic, which you can do with some sort of
classful queueing scheme, such as CBQ. You'd then
process each queue in parallel, using a drop mechanism
specific to that type of traffic.

If you're wanting to differentiate by type of
application, I'd suggest looking at the layer 7
classifier patches.
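
A rough sketch of that structure (rates, ports and
RED parameters are placeholders, not a tuned
configuration): split the traffic with a classful
qdisc, then hang a different drop behaviour under
each class.

  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1: classid 1:1 htb rate 8mbit ceil 8mbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 8mbit
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 6mbit ceil 8mbit
  # interactive class: fair queueing, no early dropping
  tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
  # bulk class: RED, with its own drop probability
  tc qdisc add dev eth0 parent 1:20 handle 20: red limit 60000 min 15000 \
      max 45000 avpkt 1000 burst 25 bandwidth 8mbit probability 0.02
  # crude classifier: ssh is "interactive"; everything else lands in 1:20
  tc filter add dev eth0 parent 1: protocol ip u32 \
      match ip dport 22 0xffff flowid 1:10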

There are probably QoS schemes that are ideally suited
to what you want, and I'd suggest looking through the
research papers for what has been done so far, and
also to look through the BSD ALTQ code to see what has
been implemented outside of Linux. The usual advice is
to get the most for the least effort, so if someone
has already solved this problem, you would be advised
to build on their solution rather than to reinvent the
wheel.

--- Rahul Hari <[EMAIL PROTECTED]> wrote:
> I want to implement a new queuing discipline for the
> tool tc. The new
> queuing discipline would support the application of
> multiple threads
> on the same queue with different kinds of traffic.
> Each kind of packet
> will have its own drop probability but while
> calculating the average
> queue length,  the no. of packets in the queue will
> be equal to the
> sum of the individual no. of packets(of the
> different kinds of
> traffic) in the queue.
> It would be great if someone could send me names of
> the files which I
> will have to modify and if possible links to any
> documentation that
> might be available for those files.
> If this queuing policy already exists, please let me
> know about its
> name and any links that might be helpful in
> understanding the policy.
> Please note that I have tried using GRED, but it
> does not fulfill my
> requirement.
> Regards,
> Rahul
> 
> -- 
> Rahul Hari
> Junior Under Grad. Student,
> Department of CSE,
> ITBHU,
> Varanasi.





Re: [LARTC] routing decisions

2005-01-06 Thread Jonathan Day
Is packet forwarding enabled on the box you're using
as a gateway?
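
To check and, if necessary, enable it (the same
applies on 192.168.0.4, which also has to forward -
and almost certainly masquerade - the traffic it
relays; the iptables line is only a guess at that
box's setup):

  cat /proc/sys/net/ipv4/ip_forward     # 0 means forwarding is off
  echo 1 > /proc/sys/net/ipv4/ip_forward
  # on 192.168.0.4, assuming its dial-up link is also ppp0:
  iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE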

--- Payal Rathod <[EMAIL PROTECTED]>
wrote:

> Hi,
> I have a Mandrake 10.0 gateway with internet via
> ppp0. Also, another
> machine 192.168.0.4 is always connected to the net via
> a dial-up
> modem. Now I want to allow a machine (192.168.0.2)
> in my LAN to
> access net through 192.168.0.4. So according to
> lartc howto I did,
> # echo 200 John >> /etc/iproute2/rt_tables
> # ip rule add from 192.168.0.2 table John
> # ip route add default via 192.168.0.4 dev eth0
> table John
> # ip route flush cache
> 
> But still 192.168.0.2 cannot access internet.
> tracert shows that
> the traffic is coming to my Linux gateway and then
> going nowhere.
> I have not changed anything in 192.168.0.2
> 
> What steps am I missing?
> 
> Waiting eagerly for any help on this.
> With warm regards,
> -Payal






Re: [LARTC] TC GUI or graphs?

2005-01-05 Thread Jonathan Day
There are GUIs for HTB, but no guarantee they'll work
with current systems or with what you want to do.

http://freshmeat.net/projects/easyshape/
http://freshmeat.net/projects/khtb/
http://freshmeat.net/projects/ktctool/
http://freshmeat.net/projects/bwmtools/
http://freshmeat.net/projects/htbgui/
http://freshmeat.net/projects/arbitrator/
http://freshmeat.net/projects/ibmonitor/

P.S. I rarely use Google to search for software. :)

--- Jason Boxman <[EMAIL PROTECTED]> wrote:

> On Wednesday 05 January 2005 09:55, Deepak Seshadri
> wrote:
> > Hello everybody,
> >
> > I am new to the lartc mailing list. I have been
> using "tc" for some time
> > now. To be precise, tc & HTB to shape traffic. I
> did a lot of search on
> > Google for 2 things:
> >
> > - A GUI to create configure new qdiscs & classes
> for HTB
> 
> There are two projects, the one I remember being
> lql, aimed at creating
> libraries for plugging into netlink directly for QoS
> stuff.  One of these 
> days there will probably be a nice GUI available. 
> Presently I don't know of 
> any.






Re: [LARTC] Sharing/splitting bandwidth on a link while bandwidth of the link is variable (or unknown) ?

2005-01-05 Thread Jonathan Day
In the past, people played with routing protocols such
as HELLO and FUZZBALL which reacted to the latency on
each link. They gave up. It turns out that overly
reactive systems are not that useful. The gains are
dubious, and the costs in resources are high.

The other factor is what the original poster meant
by an "exact" split. Exact over what timeframe? At
any given instant, only one packet is being sent (it's
a serial stream), so the split over that amount of time
is always 100% of whatever it's doing.

On the other hand, if it's an exact split over a
fairly long timeslice, you use a class-based queueing
system and measure what's been sent out of each queue.
You then predict what the net bandwidth is over the
whole timeslice, by looking at what gets sent and what
gets dropped. Each time you adjust the prediction, you
adjust the hard limits for the queues to allow out
whatever is left of that class' net bandwidth.
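
A very rough sketch of that loop (class numbers and
rates are placeholders): read the per-class counters
each timeslice, estimate the usable bandwidth from
what was sent versus dropped, then move the limits.

  # per-class "Sent ... bytes ... dropped ..." counters
  tc -s class show dev eth0
  # re-issue the limits with the new estimate for this timeslice
  tc class change dev eth0 parent 1:1 classid 1:10 htb rate 300kbit ceil 450kbit
  tc class change dev eth0 parent 1:1 classid 1:20 htb rate 150kbit ceil 450kbit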

A third approach is to see what you can do with ECN
and other back-propagating QoS protocols to throttle
given queues that reach or exceed their limits. That
way, you don't need to care what the bandwidths are at
any given time, because the primary router that
divvies up the bandwidth can throttle in proportion to
how it does that division.

--- Rene Gallati <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> > I want to  share/split bandwidth on a link with
> unknown bandwidth. I 
> > want to exactly
> > share/split bandwidth (for example : FTP 30% ,
> HTTP 20% or 30% for 
> > a group of PCs and so forth.)
> >  
> > "Traffic-Control-HOWTO" talk that PRIO scheduler
> is an ideal match for 
> > "Handling a link with a variable (or unknown)
> bandwidth".
> >  
> > But PRIO scheduler can not exactly share/split
> bandwidth .
> >  
> > Could you tell me if  I can exactly share/split
> bandwidth on a link with 
> > a variable (or unknown) bandwidth?  If it is
> possible, how can I do that ?
> 
> [Warning irony ahead]
> I'll give you a complete script if you tell me how
> many bits/sec exactly 
> 30% of unknown is.
> [/irony]
> 
> In other words: You don't know how much there is
> available, I don't know 
>   it, the list doesn't know it and your computer
> can't know it either. 
> So no - that's not possible (and should be evident,
> hopefully)
> 
> What you CAN do is let some ping run alongside and
> react to changes in 
> the latency it sees across the link - then adapt the
> script and thus 
> changing the parameters. This needs a lot of
> experimentation, is a bad 
> hack but maybe it is sufficient for what you are
> trying to achieve.
> 
> Otherwise, find a minimum value of bandwidth you
> never drop below and 
> set that as the maximum bandwidth available for your
> root qdisc. This 
> gives you the predictability.
> 
> Or : find a better line/ISP. Find and drop abusive
> users/applications.
> 
> But all in all, there's not much you can actually do
> in your situation.
> -- 
> 
> C U
> 
>   - --  - -/\/  René Gallati 
> \/\ - --- -- -
> 






Re: [LARTC] Suggestion - table of QoS mechanisms

2005-01-04 Thread Jonathan Day
The URL for the guide was useful, thanks.

Here are a few other QoS systems for Linux:

RSVP is provided in the stock kernel. This allows you
to reserve a given amount of bandwidth for a specific
UDP data stream. It is typically not used in "the real
world" because it doesn't scale well. Too much state
information needs to be transmitted and kept track of,
to be useful on backbone routers.

USAGI is based on KAME, and KAME supports ALTQ. In
turn, ALTQ supports HFSC, JoBS, RIO and BLUE for both
IPv4 and IPv6. It is NOT clear from the USAGI web page
as to whether ALTQ is included in their code.
http://www.linux-ipv6.org/
http://www.csl.sony.co.jp/person/kjc/kjc/software.html

QLinux supports H-SFQ, but is based on Linux 2.2 and
the 2.4 sources don't seem to have ever been released.
http://lass.cs.umass.edu/software/qlinux/

DGT2684 (seems to be dead - unless the pseudo-QoS for
ATM in the Linux kernel is based on it, though in that
case you'd have thought the code on SourceForge would
be current)
http://sourceforge.net/projects/dgt2684

I'm not altogether sure what SIMA did, but it seems to
have been a queueing system of sorts for the 2.2
kernels.
http://www.atm.tut.fi/faster/sima/

It's a cheat, but you can route traffic onto and off
Network Simulator and therefore use any QoS devices
available for that for regular networking. This
includes Fair Queueing, Stochastic Fair Queueing and
Deficit Round Robin, by default. Many of the ALTQ
routines have NS implementations, as well, and I'm
sure there are others. NS seems to be popular with
protocol researchers.
http://www.isi.edu/nsnam/ns/

There's also a QoS Library which provides a useful API
for applications.
http://www.coverfire.com/lql/

Finally, I also mentioned SGI's STP patch. STP allows
you to reserve network resources for a future data
stream. As far as I can tell, it is very similar in
concept to RSVP, except that it is not UDP-specific
and is specifically designed for very high-speed
networks, where constructing and destructing
connections at the time of use can add excessive
latency. By pre-allocating, the connection can all be
set up and ready to use when it is actually needed.

--- Jason Boxman <[EMAIL PROTECTED]> wrote:

(snip)
> Possibly.
> 
> I only know of CBQ, HTB, HFSC, SFQ, TBF, PFIFO,
> PRIO, G/RED for Linux offhand.
(snip)





[LARTC] Suggestion - table of QoS mechanisms

2005-01-04 Thread Jonathan Day
Hi,

A thought for the list. As I mentioned in another
posting, there are a lot of QoS mechanisms out there.
Linux supports some, but not all. Some patchsets add
others, but don't work for all kernels. There are also
userland implementations, usually sitting in software
routers, but there are other packages.

Would it be helpful if I worked on a table of what's
out there for Linux and in what form?

The main drawback of such a list is that while I can
tell you if such-and-such an implementation exists,
that doesn't mean the implementation is any good, or
that the QoS concept is valid. There are plenty of
arguments amongst QoS researchers as to whether RED is
useful or not, and those are the people most qualified
to know the answer. Nor would I be able to verify what
kernel patches work well together, so the individual
existence of specific mechanisms doesn't mean you can
combine them usefully.

On the other hand, there doesn't seem to be any easy
way for people to find out what does exist, what
doesn't exist YET for Linux but could easily be
written, or what used to exist but has been dropped
for reasons known or unknown.

For example, SGI's "Scheduled Transfer Protocol",
MPLS, WRR and ESFQ are all examples of networking
algorithms that are apparently deceased. The Layer 7
packet classifier isn't dead, but doesn't apply
cleanly to kernels 2.6.9 or later.

Finding these can be fun, too. I've got a copy of the
Scheduled Transfer Protocol patches, but that's
because I downloaded them while they were still on
SGI's FTP site. If they exist anywhere on the Internet
today, I haven't the foggiest where. The site for ESFQ
is dead, and the only known forward port to recent
kernels is merged into the qnet patch series,
making it hard to extract.

Any thoughts on this?

Jonathan






Re: [LARTC] Scheduler Mechnisms!

2005-01-04 Thread Jonathan Day
I may be wrong on this, but I believe that RED can be
attached to any queueing system, including the basic
FIFO queues. In a sense, you're still using a
scheduling system when using the default arrangement;
it's just a first-come, first-served one.

RED is classless and applies to the whole of a queue.
What that queue is attached to, if I understand it
correctly, isn't important. It can be a class, but it
can just as easily be everything going through that
device.

Again, someone correct me if I'm wrong, but as I
understand it, there are four levels to the whole
QoS/diffserv concept.

One of these levels is the queueing discipline. This
can be something like CBQ, WFQ, FIFO, PRIO, or
whatever. This is how the data is organized, it does
not describe how the data is sent. In the case of
something like CBQ, you have a defined set of queues
in parallel, with rules as to what packets fall into
what queue. On the other hand, queueing schemes such
as FIFO are flat. There's a single queue that
everything goes through, though there may be different
rules for how things get pushed to it.

Another level is the scheduling mechanism. This
describes how the data is sent, once organized, but
does not describe the organization itself. If you've
only one queue, then there's really not much to
schedule. If you've multiple queues, then it's fairly
normal to use "round robin" or "weighted round robin"
to pick which queue to pull a packet from. Linux' CBQ
uses "weighted round robin", according to the C file.

The next level is the packet dropping mechanism. When
queues flood, packets are going to be dropped. There's
nowhere to store them. I'm pretty sure the default
behaviour is to simply continue accepting packets, but
to drop any that expire before being sent or which
fall off the end of the queue (if the queue is
bounded). RED, GRED, and a whole host of similar
mechanisms, try to drop packets in a more controlled
manner. However, that is really all they do.

Finally, there are mechanisms for damping overly
active applications, such as ECN. The idea here is
that if you throttle back whatever is generating
excess traffic, you don't get the problems associated
with dealing with it. The "default" behaviour is to do
nothing.

When setting up QoS - on Linux or anything else - you
basically pick one of each of the four categories to
assemble a packet delivery system. Even without QoS,
you're doing that, you're just using the defaults in
all cases. The mechanisms are still going to be there.
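
As a contrived but concrete illustration of picking
one mechanism per level (device and numbers are
placeholders): the organization below is a stock
three-band PRIO qdisc, the scheduling is PRIO's
built-in strict priority, the drop mechanism on the
bulk band is RED, and the damping is RED's ECN
marking.

  # organization: classify into three bands (1:1 highest .. 1:3 lowest)
  tc qdisc add dev eth0 root handle 1: prio
  # scheduling: implicit - PRIO always empties band 1:1 before 1:2, etc.
  # dropping plus damping: RED with ECN on the lowest band, replacing its fifo
  tc qdisc add dev eth0 parent 1:3 handle 30: red limit 60000 min 15000 \
      max 45000 avpkt 1000 burst 25 bandwidth 10mbit probability 0.02 ecn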

The Linux configuration menu does NOT match the above
terminology, or the terminology in the source code.
Thus, the source code identifies CBQ as a queueing
discipline, but the configuration menu calls it a
scheduler. The QoS help is also not very helpful, as
it mostly tells people to look at the source. However,
if you look at the source for CBQ or RED, for example,
the explanation is relative to the cited papers, so
you then have to go and read those before coming back
and doing anything.

This is one area I hope is going to get resolved in
the reasonably near future. If not, I might have to
come up with a patch myself. The very thought of that
should send shivers down the spines of any kernel
developers out there.

Jonathan

--- Zhenyu Wu <[EMAIL PROTECTED]> wrote:

> Thank you very much, I will try to find these papers,
> which must be very helpful
> for me. The "more" means whether there are
> other mechanisms, not only for
> Linux. Sorry, I did not make it clear! Sometimes I
> wonder whether qdiscs
> such as CBQ, RED, GRED ... belong to the
> scheduler mechanisms in the Linux
> environment. For example, in RED all I can find
> are enqueue and dequeue, so
> if I add a RED qdisc to a class, must I add a
> scheduler mechanism so that I can
> decide which packet in the queues will be scheduled
> and put on the link?
> 
> Good luck,
> Best,






Re: [LARTC] ESFQ?

2005-01-04 Thread Jonathan Day
To the best of my knowledge, ESFQ for Linux is
essentially dead. There's a patchset - QNet - which
does port ESFQ to the 2.6.8/2.6.9 kernels, but ESFQ is
not split out, so it looks like an all-or-nothing
deal.

http://kem.p.lodz.pl/~peter/qnet/

I don't know if QNet is still being maintained - the
last update on the page refers to October 2004 - and
there's nothing to indicate how well the forward ports
actually work in practice.

A search using Google shows only older ESFQ versions
(one for 2.6.0-test11, for example) but nothing newer.

There was one posting about ESFQ to the kernel
developers mailing list, but I couldn't see any
follow-ups to it. Nor does it appear to be in Andrew
Morton's patchset (an excellent indicator of interest
level and the probability of ending up in the official
kernel).

Unfortunately, this seems to be fairly common in Linux
QoS - too many one-man projects and too few resources
to keep them going.

--- Justin Schoeman <[EMAIL PROTECTED]> wrote:

> Hi again,
> 
> I was just looking around for ESFQ sources, and I
> see that the main site 
> is down, and only has kernel 2.6.4 patches.
> 
> Is ESFQ maintained?  If so, where can I find patches
> for 2.6.10?
> 
> Thanks,
> -justin







Re: [LARTC] Scheduler Mechnisms!

2005-01-03 Thread Jonathan Day
It depends on what you mean by "more". More for Linux?
Certainly. HTB3 seems to be a popular mechanism, and
you can use IMQ to set up an intermediate device to
allow shaping of both inbound and outbound traffic,
using one (or more!) scheduling mechanisms in series.

(In fact, there are several versions of IMQ out there.
I've given links to both the projects that seem to be
alive, but there may be more.)
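
For reference, the usual IMQ recipe looks roughly
like this - it needs the IMQ kernel and iptables
patches, so treat the exact names as approximate:

  # create the intermediate device and push inbound eth0 traffic through it
  modprobe imq numdevs=1
  iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0
  ip link set imq0 up
  # inbound traffic can now be shaped just like outbound traffic
  tc qdisc add dev imq0 root handle 1: htb default 10
  tc class add dev imq0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit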

There's also ESFQ, but there doesn't seem to be much
active work on that. There are forward ports to recent
Linux kernels, though. QLinux has a version of H-SFQ
for Linux, but again it seems to be getting long in
the tooth. Unfortunately, I can't find any forward
ports of that.

http://luxik.cdi.cz/~devik/qos/htb/
http://www.linuximq.net/
http://pupa.da.ru/imq/

http://www.digriz.org.uk/jdg-qos-script/#qos-2.6
http://kem.p.lodz.pl/~peter/qnet/
http://lass.cs.umass.edu/software/qlinux/

There are a great many systems that I can't find a
Linux version of. Cisco routers support something
called "Class-Based Weighted Fair Queueing" (CBWFQ)
which seems to be a hybrid of classful and classless
scheduling. Cisco also has two versions of ECN, for
forwards and backwards propagation.

I've listed below a number of papers detailing various
QoS schemes. Some of these have been implemented in
other OS' (the BSDs tend to get a lot of this stuff
implemented quickly for them as part of ALTQ) and some
I've never seen an implementation at all. However, the
papers should all give enough information to write a
version for Linux.

Note: ALTQ can be found at:
http://www.csl.sony.co.jp/person/kjc/kjc/software.html

Please note that this list is not exhaustive. Rather,
I got exhausted after trying to find what was out
there and what state it was currently in. QoS is a big
field, if the number of papers is anything to go by.
Linux only touches the fringes of it. If anyone feels
particularly bored, or in need of a good ego boost, it
would be cool to see if a reasonable selection of
these could be introduced into Linux over the 2.7
cycle.

EDF (Earliest Deadline First)
http://citeseer.ist.psu.edu/13919.html

BLUE (an alternative to RED)
http://citeseer.ist.psu.edu/feng99blue.html

AF PHB (Assured Forwarding Per-Hop Behaviour)
http://citeseer.ist.psu.edu/552302.html

SFB (Stochastic Fair Blue)
http://citeseer.ist.psu.edu/551253.html

GREEN (a pro-active variant on the theme of RED)
http://citeseer.ist.psu.edu/feng02green.html

SMART (Scalable Multipath Aggregated RouTing)
http://citeseer.ist.psu.edu/vutukury00smart.html

CSFQ (Core Stateless Fair Queueing)
http://citeseer.ist.psu.edu/391.html

StFQ (Start-Time Fair Queueing)
http://citeseer.ist.psu.edu/goyal96starttime.html

RFQ (Rainbow Fair Queueing)
http://citeseer.ist.psu.edu/cao00rainbow.html

PrFQ (Probabilistic Fair Queueing)
http://citeseer.ist.psu.edu/anker00prfq.html

ERR (Elastic Round Robin)
http://citeseer.ist.psu.edu/kanhere02fair.html

GFQ (Greedy Fair Queueing)
http://citeseer.ist.psu.edu/690207.html

PERR (Prioritized Elastic Round Robin)
http://citeseer.ist.psu.edu/681127.html

AOQ (Anchored Opportunity Queueing)
http://citeseer.ist.psu.edu/701742.html

BSFQ (Bin Sort Fair Queueing)
http://citeseer.ist.psu.edu/622188.html


As for the final question on what happens between
enqueue and dequeue, there are various diagrams out
there which show different aspects of how packets
traverse the system. I've included a few links to
those I could find:

http://open-source.arkoon.net/kernel/kernel_net.png
http://ebtables.sourceforge.net/br_fw_ia/PacketFlow.png
http://ebtables.sourceforge.net/br_fw_ia/br_fw_ia.html
http://www.docum.org/docum.org/kptd/

The last of these is the infamous Kernel Packet
Travelling Diagram. Most links to this that I've been
able to find are broken. It should be noted that the
diagrams all refer to the Linux 2.4 kernel. Linux 2.6
has quite a few QoS changes to it, but I'm unclear as
to whether they significantly alter any of the flows.

I hope this is of some use. Or, at the very least, is
an effective remedy to insomnia. :)

Jonathan

--- Zhenyu Wu <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> Normally, in addition to such qdisc scheduler
> mechanisms as FIFO, PQ, WRR, and WFQ,
> are there any more? Also, there is some confusion
> about the scheduler in the Linux environment:
> Assume there is a qdisc, such as RED, as a leaf qdisc
> in a router. We know that if
> there is a packet to enqueue,
> the function red_enqueue is
> called, but when does the packet leave the queue (when
> is the function red_dequeue
> called)? I think it is meaningless if the packet
> leaves the queue just as it entered
> it. Is there anything that needs to be done between the
> packet's enqueue and dequeue?
> 
> Best,
> 
> 





Re: [LARTC] QoS with Artifficial Intelligence

2004-12-20 Thread Jonathan Day
I'm guessing the "AI" bit is a simplified way of
expressing what they're after. "AI", per se, is
meaningless, because it's undefined.

What I -think- they want to do is examine the current
behaviour of the traffic, anticipate how it is going
to behave next, set the QoS to match that expectation,
and then "learn" both from what actually happens and
from the quality of traffic flow produced.

A self-adjusting QoS is a tough problem, and I'm not
aware of anyone who has done much research into such
things. One of the problems is that traffic flow is
random, rather than periodic or constant. There is no
obvious way, at the start, to tell if a given transfer
is going to be large or small. Also, you can't just
pick a certain set of variables to change, because the
values are highly interdependent. You've got to change
them all, and that makes the problem much more
complex.

A much better approach would be to look at QoS over
the network, rather than at a single point. This is
because optimising a single point can make some
subsequent point perform worse. What you want is to
optimise the system in totality.

On a relatively small network, this is relatively
easy. Just have all the routers periodically transmit
their current settings, the statistics per interface
per traffic class (you don't care about the source for
this), the router load and the estimated latency &
packet loss. This data goes to a central server, which
determines the settings likely to work best in the
future.

Because we're dealing with relatively large pools of
aggregated random data, we can apply statistical
techniques. I'd start off with looking at queueing
theory, which deals with the forming and processing of
a linear series of random events which need
processing. This should be able to tell you how large
a bucket you want for each class. (ie: the hard
limit.)

Calculating the optimal number of classes is harder,
but you do know that the sum of the upper soft limits
for all classes must be equal to or less than the
capacity of the router. To me, that suggests you might
be able to get a good guess for the soft limits via
the simplex method (from Operational
Research).

Once you know the soft and hard limits, you can
determine the number of classes by queueing theory -
it is the minimum number of "queues" into which you
need to split the traffic to get maximum throughput,
avoiding "empty" queues.

This approach would not work for a single router. The
traffic is "random", but it is not random enough, and
statistics doesn't work well on single points. It will
also fail on very large networks, because the overhead
of transmitting the metadata would become too large,
and by the time the data was processed, the results
would no longer have much meaning.

For very large networks, you could "escape" the
problem by regarding it as a large collection of
overlapping medium-sized networks. You could then
process each of the medium-sized networks using the
above method. Where two (or more) manager nodes
instruct a specifc router, the router would take the
average recommended values. (If you know in advance
that one of the managers is more relevant than
another, then simply weight the average accordingly.)

***WARNING***

All of the above is speculative, in the sense that it
-should- work, but I don't have a large enough test
network to verify it. Nor do I know the optimum number
of routers/hosts where the numbers are statistically
meaningful, yet where the metadata doesn't interfere
with the traffic flow AND where the results can be
passed back and acted on within the timeframe for
which they are valid.

I say the above -should- work, because there are
methods for solving the various parts of the
problem-space. If you combine these methods correctly,
you'll end up with a solution to the whole problem
space.

Now comes crunch #1. Although traffic flow is random,
in aggregate, it is not necessarily random when split
into classes. Certain events (eg: backups over a
network, connecting to a DHCP server on power-up, etc)
are mostly going to occur at specific times. You could
always complicate the manager nodes, by adding a diary
of known large-scale events, so that it can statically
allocate the correct bandwidth for those and then
dynamically allocate whatever is left.

Crunch #2. Statistical methods, heuristics, etc., are
generally slow. Changes in network behaviour can be
fast. To be meaningful, the results have to be
calculated and passed back to the routers so they can
update their QoS methods before the traffic has
changed significantly.

Crunch #3. Probably the biggest problem of all.
Transmitting the metadata and then getting the updated
QoS information is going to take up bandwidth. This is
going to alter the flow. If you're lucky, the change
will be short-lived. If you're unlucky, the knock-on
effects (eg: resent packets, changes in
load-balancing, etc) will disturb the pattern
significantly and unpredictably, making the new QoS
parameters unreliable.

[LARTC] Documentation query

2004-04-11 Thread Jonathan Day
Hi,

The documentation in CVS seems to be getting a little
stale. Is this because of a lack of contributors
(unlikely, given the very healthy mailing list) or
lack of time by maintainers?

If the former, then I'll work on writing something up,
because I'd like to see the knowledge out there
preserved.

If the latter, is there anything list members like
myself can do?




Re: [LARTC] How to route internal addresses?

2004-03-21 Thread Jonathan Day
Although the original question was retracted, this
does bring up an interesting point - how to make
Linux treat certain IP ranges on certain devices as
though it were a hub or switch, rather than a router,
and/or route over the remainder.

This is a much more specific and complex problem than
merely copying traffic over via a bridge, as it means
you can have the following scenario:

Subnets A, B, C come in on card X
Subnets D, E come in on card Y
Subnet F comes in on card Z

A goes to Y as per a hub, but routes to Z
B goes to Y, Z as a switch
The remainder are routed over all

Arguably, hub/switch dynamics is beyond the scope of a
routing list, but nobody can argue that this isn't an
advanced problem in networking.
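
One way to approximate that mix on Linux is a bridge
over the cards plus the ebtables broute table, where
DROP means "hand the frame to the routing code" and
ACCEPT means "keep bridging it". A very rough sketch
(bridge-utils and ebtables assumed; interface names
and subnets invented):

  brctl addbr br0
  brctl addif br0 eth1       # card X
  brctl addif br0 eth2       # card Y
  ip link set br0 up
  # frames from subnet A towards subnet F get routed instead of bridged
  ebtables -t broute -A BROUTING -p IPv4 --ip-src 10.1.1.0/24 \
      --ip-dst 10.1.6.0/24 -j DROP
  # everything else on the bridge keeps being bridged (the default ACCEPT)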


--- Serban Murariu <[EMAIL PROTECTED]> wrote:
> Hey,
> i have the following problem/question:
> how can i route internal addresses?
> my setup is something like 




[LARTC] Non-traditional Failover Query

2004-02-23 Thread Jonathan Day
Hi,

Partly because I never like straightforward solutions,
I am looking to implement a non-standard failover
system that owes its origins to mixing RAID 5 with
some beer.

The idea is to have machines A, B and C, configured as
follows:

1) Any given process is running on TWO machines at the
same time. If a process or machine fails, then a new
backup is started on the third machine. There is thus
a rotation around the nodes.

2) Packets destined for any of the machines should be
received by ALL of the machines.

3) If the primary process is on A, then replies from B
and C should still be generated, but be transparently
dropped. Likewise, if the primary process is on B or
C.

Let's say you have an Apache process running on A and
B. B is shadowing A on everything. It's at the same
point, has the same connections established, etc.
Failover becomes merely "ungagging" B.

How is this a LARTC problem?

Uhhh... because this approach is seriously abusing the
entire networking stack. We essentially have all three
machines running with an identical MAC and IP address
visible to the outside. The "distinct" address is
purely internal.

What this requires is a way of tricking all three
machines into believing that they are the sole
recipients. This keeps the stacks in a uniform state,
which means we can fail-over the connections without
having to either checkpoint or copy stateful
information, both of which get ugly when you start
talking about lots of information.

This leads to the second network-related problem. If
you have two identical machines starting from
identical states, and processing identical streams,
then they should end up in identical states - ie:
crashed.

This is easily fixed. If A is the machine you are
starting all the processes on, and B is your "mirror",
then C needs to take up the excess load.

In other words, A+C is a cluster, and B+C is a second
cluster. Processes migrated to C from A or B aren't
mirrored. (This is akin to RAID 5's partial backup.)

So, the three machines need to be seen from four
distinct views:

A) From outside, a single machine is visible.

B) From the HA perspective, there is one primary
machine, one mirror machine and one spare

C) From the load-balance perspective, there are two
overlapping clusters.

D) From the LAN perspective, there are three distinct,
uniquely-addressable machines.

Using Linux' advanced networking, BPF and netfilter
layers, connection mirroring for HA/load-balancing
purposes should be straight-forward. Much more so than
using the "wedge" concept proposed by MIT. Because you
need send nothing more than flags between the
machines, it should also be less expensive on the LAN
and processor.
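
To make the "gagging" part concrete, a minimal
netfilter sketch (interface and service invented for
illustration): the shadow machines generate their
replies as normal, the frames just never leave the
box, and failover is nothing more than deleting the
rule.

  # on the shadow (gagged) machines only:
  iptables -A OUTPUT -o eth0 -p tcp --sport 80 -j DROP
  # failover, i.e. "ungagging" the chosen shadow:
  iptables -D OUTPUT -o eth0 -p tcp --sport 80 -j DROP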

So... how would you go about doing this?

