From: [EMAIL PROTECTED] (Paul Hampson)
Wait, you're trying to send more data than the link can take? Then
send UDP, throttle it at the local end with a drop-oldest qdisc. Then
you get the effect of 'most recent data is best'.
No, of course I don't expect to send more than the limit.
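The 'drop-oldest' behaviour described above can be sketched in a few lines; this is a toy model of the queueing discipline, not kernel code (the class name and limit are made up for illustration):

```python
from collections import deque

# Toy sketch of a 'drop-oldest' queue: enqueueing onto a full queue
# silently discards the oldest entry, so the queue always holds the
# most recent data.  collections.deque with maxlen does exactly this.
class DropOldestQueue:
    def __init__(self, limit):
        self.q = deque(maxlen=limit)

    def enqueue(self, packet):
        self.q.append(packet)   # when full, the oldest entry is dropped

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = DropOldestQueue(3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    q.enqueue(pkt)
print(list(q.q))   # the three most recent packets survive: p3, p4, p5
```

For real-time data where stale samples are worthless, this is exactly the trade-off you want: the link stays saturated with the freshest data only.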
From: Andreas Klauer [EMAIL PROTECTED]
Doesn't every QDisc work that way? When the kernel wants to send a packet,
it calls the appropriate dequeue() function in the QDisc. I'm not a kernel
developer so this guess might be wrong.
That's correct, but this operation takes a packet from an
I'm interested in all of
- opinions about why this is a good or bad idea
- pointers to similar proposals or products that already exist
- implementation suggestions
This is meant for real-time applications that have small available
bandwidth, so they have to consider carefully what's the best
How can one copy packets to a monitoring interface?
For a start I'd like to know how to just copy all of those
that arrive on eth1 out to eth2 in addition to whatever else
would normally happen to them.
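One way to do this (a sketch, untested here) is the ingress qdisc plus the mirred action from the act_mirred module; the empty u32 match catches every packet:

```shell
# Mirror everything arriving on eth1 out eth2, in addition to normal
# processing.  Requires the act_mirred module.
tc qdisc add dev eth1 ingress
tc filter add dev eth1 parent ffff: protocol ip u32 \
    match u32 0 0 \
    action mirred egress mirror dev eth2
```

With `mirror` the original packet continues on its normal path and a copy is sent out eth2; `redirect` would steal the packet instead.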
After that, a number of interesting possibilities:
- Copy only those with specified
             INTERNET
                |
                |eth0 202.14.41.1
            BW.Manager
                |
                +--eth1 192.168.1.0/24
                |
                +--eth2 192.168.2.0/24
Total incoming bandwidth on eth0 is 1024kbps and
should be shared between eth1 and eth2, which means each gets 512kbps,
burstable to 1024kbps if the other
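A sketch of the usual HTB shape for this (untested; note that HTB borrowing only works inside one tree, so to let the two LANs borrow from each other the shaping has to happen on a single device - IMQ was the common workaround at the time, and imq0 here is that assumption):

```shell
# One 1024kbit parent, two 512kbit children that may borrow up to the
# parent's rate when the other class is idle.
tc qdisc add dev imq0 root handle 1: htb
tc class add dev imq0 parent 1:  classid 1:1  htb rate 1024kbit
tc class add dev imq0 parent 1:1 classid 1:10 htb rate 512kbit ceil 1024kbit
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 512kbit ceil 1024kbit
tc filter add dev imq0 parent 1: protocol ip u32 \
    match ip dst 192.168.1.0/24 flowid 1:10
tc filter add dev imq0 parent 1: protocol ip u32 \
    match ip dst 192.168.2.0/24 flowid 1:20
```

The `rate` is the guaranteed share, `ceil` the burst limit; with both classes active each is held to 512kbit, with one idle the other can take the full 1024kbit.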
#include <linux/netfilter_ipv4/ipt_u32.h>
#include <linux/netfilter_ipv4/ip_tables.h>
/* #include <asm-i386/timex.h> for timing */
MODULE_AUTHOR("Don Cohen <[EMAIL PROTECTED]>");
MODULE_DESCRIPTION("IP tables u32 matching module");
MODULE_LICENSE("GPL");
static int
match(const struct sk_buff *skb,
const
Abraham van der Merwe writes:
Hi Don!
I then tried fifos. With small packet fifos the packet loss is just
too great to be of any use, and even then the latency is quite high (~200ms).
A small detail: what are small packet fifos? You mean fifos that
can only hold a small number of
let's say I want to limit traffic to/from a client to 64kbit. Now the client opens
a tcp connection blasting away at full speed.
If the client now pings the ISP, it gets on average around 7 seconds latency. I
tried to improve this by using SFQ on the leaf nodes of my HTB hierarchy,
but that does
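The numbers in these two reports are consistent with plain serialization delay at 64kbit/s; this back-of-the-envelope arithmetic (assuming MTU-sized 1500-byte packets, not measurements) shows why both the ~200ms and the ~7s figures are unsurprising:

```python
# At 64 kbit/s, each full-size packet ahead of a ping adds its whole
# serialization time to the ping's latency.
RATE_BPS = 64_000          # 64 kbit/s
PKT_BYTES = 1500           # assumed MTU-sized packets

per_packet = PKT_BYTES * 8 / RATE_BPS          # seconds per packet
print(f"one full-size packet: {per_packet * 1000:.0f} ms")

# A 7-second ping RTT implies roughly this much data queued ahead of it:
queued_bytes = 7 * RATE_BPS / 8
print(f"7 s of backlog: {queued_bytes:.0f} bytes "
      f"(~{queued_bytes / PKT_BYTES:.0f} packets)")
```

So even one queued packet costs ~190ms, and a default-length fifo of a few dozen packets is enough to explain multi-second ping times; SFQ reorders the backlog fairly but does not shrink it.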
More info on this problem:
/proc/stat shows me that all of the [packet forwarding] work is
done by one cpu, the other does nothing.
/var/log/messages shows the following interesting data:
kernel: enabled ExtINT on CPU#0
kernel: masked ExtINT on CPU#1
Does anyone know what this means,
I'm testing to see how fast A can ping C without losing packets.
A -- B -- C
B is a dual processor (Intel(R) XEON(TM) CPU 1.80GHz) machine.
/proc/stat shows me that all of the work is done by one cpu, the other
does nothing.
Does anyone have any ideas of why this should be the case and what
I
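The log lines above suggest all NIC interrupts are bound to CPU#0. A sketch (untested, and the IRQ number 24 is hypothetical - read the real one from /proc/interrupts) of how to check and change that:

```shell
# See which CPU is servicing the NIC interrupts.
grep eth /proc/interrupts

# Move a given IRQ to CPU1 only; the value is a CPU bitmask
# (1 = CPU0, 2 = CPU1, 3 = both, where supported by the APIC mode).
echo 2 > /proc/irq/24/smp_affinity
</imports>
```

Pinning each NIC's IRQ to a different CPU is the usual way to get both processors doing forwarding work on a dual-processor router.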
Let's say we want to limit a customer's usage to 256kbit total.
That is, you want to limit upload+download.
Whether or not it can be done, I think it's worth pointing out that
this is nonsense. It makes sense to allocate A+B only if A and B can
be used to replace each other. Upload and Download
I'd like to ask for some clarifications, if not quoting, in the tutorial
on page x321.html (not sure of section numbers) re: syn cookies.
I don't understand what the question is here.
Dan Bernstein (everyone's favorite mathematician :-) ) makes it very
clear
I was not aware of that.
Client --- R1 --- R2 --- R3 --- Web
the Client is me, the R1 router is mine (so I can control it),
R2 is my provider's router, and R3 is the provider's provider's router.
R2 - R3 is a 2mbit link
R1 - R2 is a 10mbit link
R2 has multiple interfaces and other 10mbit links
I have
I have a different proposal.
I think you should use ESFQ, always based on the internal IP address,
i.e., in the outbound direction base it on source, inbound use dest.
(This is to separately share upload and download bandwidth.)
That means that someone trying to use small bandwidth will get it
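A sketch of what that proposal looks like in tc terms (untested, and note ESFQ is a patch, not mainline, so the syntax here assumes the common esfq patch; eth0 as the uplink and eth1 as the LAN side are assumptions):

```shell
# Per-internal-IP fairness in both directions:
# downloads (toward the LAN) hashed on destination address,
# uploads (leaving the LAN) hashed on source address.
tc qdisc add dev eth1 root esfq perturb 10 hash dst
tc qdisc add dev eth0 root esfq perturb 10 hash src
```

Hashing on the internal address rather than the full 4-tuple means one user opening many connections still gets only one fair share.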
Anyone else seen this one?
ping -n {remote host}
{ 6 second delay }
64 bytes from 216.168.105.33: icmp_seq=0 ttl=255 time=6sec
64 bytes from 216.168.105.33: icmp_seq=1 ttl=255 time=5sec
64 bytes from 216.168.105.33: icmp_seq=2 ttl=255 time=4sec
64 bytes from 216.168.105.33:
Simon Matthews writes:
OK, this may be a reasonable approach, but how do I force it to initiate
connections from the fast interface, yet allow it to fail over to the
slow interface if the system removes the route to the fast gateway because
it has detected that it is not responding?
Off
From: CIT/Paul [EMAIL PROTECTED]
Any help would be greatly appreciated :) This is much better than SFQ :
Sounds like SFQ to me. Can you tell us what the differences are?
___
LARTC mailing list / [EMAIL PROTECTED]
Paul writes:
No, SFQ is not like WFQ... WRR is the closest thing to cisco's
fair-queue. WRR keeps track of the connections using ip_conntrack;
that's sort of what cisco's fair-queue does: it checks the bandwidth of
the streams and gives lower priority to the heavier streams and
Date: Mon, 24 Jun 2002 16:33:32 +0200 (CEST)
From: M.F. PSIkappa [EMAIL PROTECTED]
Subject: [LARTC] Gigabit Ethernet router
Hi,
I would like to build a new router with 3 Gigabit Ethernet cards. Do I
need a dual-processor system or not? I would like to have traffic control (htb
or
Patrick McHardy writes:
We're adding an htb as the qdisc for a child class of htb? Why?
Isn't that just wasting time? Can't all the 10: stuff be done with 1:
instead?
The root qdisc is used for delay simulation, 10:0 is the real qdisc
(
in the output of ip addr:
2: eth0: ... qdisc htb qlen 100
Does qlen 100 have anything to do with htb?
Alexander Atanasov writes:
SFQ classifies connections by IPs and ports; ESFQ can classify them by IP
alone, so we can have flows like:
SRC IP+proto+DST IP+PORT - one flow
just DST IP - one flow
another we can think of - one flow
So I think just about packets of size bytes but without
From: Martin Devera [EMAIL PROTECTED]
Subject: [LARTC] (E)SFQ suggestion
Hi,
just a simple note. Maybe it is already in progress :)
There are attempts to replace the hashing routine in SFQ to
consider IPs or ports.
What about using HRR - round-robin over a bunch of IP
addresses and
... What if SFQ were to start with a minimal number of buckets, and
track how 'deep' each bucket was, then go to a larger number of bits
(2/4 at a time?) if the buckets hit a certain depth? Theoretically,
this would mean that 'fairness' would be achieved more often in current
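The grow-on-depth idea above can be sketched as a toy hash table (the class name, 2-bit growth step, and depth threshold are illustrative assumptions, not SFQ internals):

```python
# Toy sketch: start with few buckets, grow the table by 2 bits
# whenever any bucket exceeds a depth threshold, rehashing all flows.
class AdaptiveHash:
    def __init__(self, bits=2, max_depth=4):
        self.bits = bits
        self.max_depth = max_depth
        self.buckets = [[] for _ in range(1 << bits)]

    def _rehash(self):
        self.bits += 2                       # grow 2 bits at a time
        old = [f for b in self.buckets for f in b]
        self.buckets = [[] for _ in range(1 << self.bits)]
        for f in old:
            self.buckets[hash(f) % len(self.buckets)].append(f)

    def add_flow(self, flow_key):
        b = self.buckets[hash(flow_key) % len(self.buckets)]
        b.append(flow_key)
        if len(b) > self.max_depth:
            self._rehash()

h = AdaptiveHash()
for i in range(100):
    h.add_flow(("10.0.0.%d" % i, 80))
print(len(h.buckets))   # table has grown well beyond the initial 4 buckets
```

The rehash cost is the obvious downside; in a kernel qdisc you would want to bound how often it can happen, but the upside is exactly as suggested: few flows get a tiny table, many flows get enough buckets for fairness.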
Alexander Atanasov writes:
At first look, I think I'll have to incorporate my changes into
your work. I've not done much, just added hashes and unlocked what
Alexey Kuznetsov did.
Not quite that simple. You have to throw out about half of my file,
mostly the last third or so, which
Patrick McHardy writes:
I don't think dropping at dequeue is necessary.
Here's an example showing it is.
I have an SFQ with max queue size 128 and very low rate, say about
1 packet/sec that I use to limit the rate of SYN's.
Now as part of a test I send a syn flood, say 200 packets in one
Martin Devera writes:
Hi,
only a few notes on the theme. You are right about the displacement
and bad enqueue byte counters. Maybe it would be better to count
packets at dequeue time only in classful qdiscs. It also makes better
sense because a qdisc can also instruct SFQ to drop a packet -
The rp_filter is also explained here:
http://lartc.org/HOWTO//cvs/2.4routing/html/c1182.html#AEN1188
above says:
for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
echo 1 > $i
done
First question:
ls /proc/sys/net/ipv4/conf/*/rp_filter
/proc/sys/net/ipv4/conf/all/rp_filter
I have been digging through the Lartc documentation as well as Netfilter,
etc. and haven't found much on per-connection routing for multiple
uplinks/providers.
What I would like to do is cleanly move packets out to the Internet over
two (maybe 3) separate interfaces, utilizing all of
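The standard LARTC recipe for this is a multipath default route plus source-address policy rules (a sketch, untested; all addresses, device names, and the table names T1/T2 - which must exist in /etc/iproute2/rt_tables - are made up):

```shell
# Balance new connections across two uplinks.
ip route add default scope global \
    nexthop via 10.0.1.1 dev eth1 weight 1 \
    nexthop via 10.0.2.1 dev eth2 weight 1

# Make sure traffic sourced from each uplink's address leaves
# through that uplink, so replies go back the way they came.
ip rule add from 10.0.1.2 table T1
ip rule add from 10.0.2.2 table T2
ip route add default via 10.0.1.1 dev eth1 table T1
ip route add default via 10.0.2.1 dev eth2 table T2
```

Note this balances per-destination via the route cache, not per-packet, which is usually what you want: packets of one connection stay on one link.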
Bob Gustafson writes:
But, but - this is really just software. We are not trying to cram wine
bottles down the internet pipe (although many would really like to do
that!).
The limitations I point out are inherent in tcp/ip. I think I sent a
proposal to this list describing a