LARTC, hello!
please help me. Thanks!
Best regards,
Huang Xin Gang
[EMAIL PROTECTED]
2000-02-15
hi all,
Has anyone tried changing the TCP settings in the source files? For example,
decreasing TCP_KEEPALIVE_TIME from 2 hours to, say, 10 minutes --
(120*60*HZ) to (10*60*HZ) in include/net/tcp.h?
Or changing the parameters in static unsigned long tcp_timeouts[] in
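Rather than patching the header and rebuilding, the keepalive idle time can also be changed at runtime: system-wide via the /proc/sys/net/ipv4/tcp_keepalive_time sysctl, or per socket. A minimal sketch on Linux (TCP_KEEPIDLE is a Linux-specific socket option):

```python
import socket

# A per-socket alternative to recompiling with a smaller TCP_KEEPALIVE_TIME:
# on Linux, TCP_KEEPIDLE overrides the 2-hour default for this socket only.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)     # turn probes on
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)  # 10 min, not 2 h
idle = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE)
s.close()
```

This only affects applications that opt in; the kernel-wide default still comes from the sysctl (or, on 2.4, from the compiled-in constant if you really do patch it).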
I am trying to modify the files as written in the
htb patch (htb2_2.4.17.diff).
Then I tried to recompile my kernel (2.4.17):
- make xconfig
- make dep
- make clean
- make bzImage
I get an error message:
ld: cannot open sch_epd.o: No such file or directory
In the long term, always dropping from the largest subqueue
gives you equal subqueues.
And, of course, one could have it drop them using a RED-like algorithm
to make the sessions stabilize themselves better.
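The longest-queue-drop idea can be sketched as a toy model (not the kernel SFQ code): when the shared buffer is full, drop from whichever subqueue is currently largest, so greedy flows pay the cost.

```python
from collections import deque

def enqueue(queues, flow, packet, capacity):
    """Enqueue into per-flow subqueues; once the shared buffer is over
    capacity, drop from the head of the currently largest subqueue."""
    queues.setdefault(flow, deque()).append(packet)
    if sum(len(q) for q in queues.values()) > capacity:
        largest = max(queues, key=lambda f: len(queues[f]))
        queues[largest].popleft()   # penalize the biggest flow

queues = {}
for i in range(10):                 # a greedy flow floods a 6-packet buffer
    enqueue(queues, "greedy", i, capacity=6)
for i in range(3):                  # a polite flow sends only 3 packets
    enqueue(queues, "polite", i, capacity=6)
# the subqueues have equalized: 3 packets each
```

Each of the polite flow's arrivals forces a drop from the greedy flow, which is exactly the equalizing behaviour described above.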
what they have) is better. Maybe doing it like CBQ, with average
packet size
Isn't it possible to arrange this with cbq and sfq leafs?
Yes ... By building a big CBQ/HTB tree with one leaf for each IP ... :)
I'm not sure how efficient / inefficient that is.
--
Michael T. Babcock
CTO, FibreSpeed Ltd.
___
LARTC mailing list
Hello everyone,
For performing traffic shaping using HTB, is there a parameter to define
the queue or buffer length where packets get queued (once the allocated
bandwidth is used up), instead of just getting dropped (policed)? For
example, in TBF there is a parameter
- Original Message -
From: Michael T. Babcock [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, June 07, 2002 4:18 PM
Subject: RE: [LARTC] SFQ buckets/extensions
In the long term, always dropping from the largest subqueue
gives you equal subqueues.
And, of course, one could have
Hello Devik,
Do you have any information about this?
Is there a tool that does this on Linux?
Thanks for the answer,
Giovanni
-----Original Message-----
From: Martin Devera [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 6, 2002 20:06
To: Mancinelli Giovanni
Cc: '[EMAIL PROTECTED]'
Subject: Re: [LARTC]
Alexander Atanasov writes:
SFQ classifies connections by ports; esfq can classify them by just IP as well, so
we can have flows:
SRC IP+proto+DST IP+PORT - one flow
just DST IP - one flow
another we can think of - one flow
So I think just about packets of size bytes but without
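The flow definitions above can be illustrated with a small hash helper (a hypothetical sketch, not the esfq source): the choice of hash key decides what counts as "one flow".

```python
import hashlib

def flow_bucket(pkt, mode, buckets=1024):
    """Map a packet to a subqueue index under different flow definitions."""
    if mode == "classic":   # SFQ-style: src IP + proto + dst IP + port
        key = (pkt["src"], pkt["proto"], pkt["dst"], pkt["dport"])
    elif mode == "dst":     # one flow per destination IP
        key = (pkt["dst"],)
    else:                   # "src": one flow per source IP
        key = (pkt["src"],)
    digest = hashlib.sha1(repr(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % buckets

pkt_http  = {"src": "10.0.0.1", "dst": "10.0.0.9", "proto": 6, "dport": 80}
pkt_https = {"src": "10.0.0.1", "dst": "10.0.0.9", "proto": 6, "dport": 443}
# with mode="dst" both packets land in the same bucket (one flow per host);
# with mode="classic" they are usually hashed into different buckets
```

Bucketing by destination IP makes fairness per user rather than per connection, which is the point of the esfq variants being discussed.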
Hi,
I'm running an experiment and I'm having some problems with my Core Router.
I'm using MGEN to generate the traffic. For some reason the core router
is not detecting the packets, as I check the count of packets being sent using
tc -s qdisc. On the other hand, when I generate the traffic
Wow, that is very interesting. Did you try to read the counters
in iptables -vL and compare the counts? For example, read the value from
/proc/net/dev, compare it to the count of packets at the INPUT chain, and
then compare with the number of packets in the DROP chains.
That could give us a better picture of where the packets are going.
What is the problem? Use cat ethloop_script | ./ethloop,
for example ..
devik
On Thu, 6 Jun 2002, King Yung Tong wrote:
Dear all,
I copied an Ethloop script as a tc script, but there is no response???
Does anyone know how to use ethloop?
Ethloop:
#1st parameter -- time in milliseconds from program
Martin Devera wrote:
Wow, that is very interesting. Did you try to read the counters
in iptables -vL and compare the counts? For example, read the value from
/proc/net/dev, compare it to the count of packets at the INPUT chain, and
then compare with the number of packets in the DROP chains.
That could give us a better picture of where
No, I do not think so. If we think just about TCP connections
we may end up with ideas like changing the window size. TCP tunes itself to send this
way, but it also tries to send more every 5 seconds if it has data.
Please, not this. :-) Packetizer software goes this way and it is
- not allowed by
that's quite possible ... The only way to equalize bandwidth fairly in
these scenarios still seems to be to implement the hierarchical approach
of hashing against destination IP (the user receiving the packets) and
exactly. The discussion should be about how to implement it
efficiently. What
Thank you for your answer. 1:10 is increased, but 1:20 is also increased and
increases to the specified rate if I add 100kbps to every line.
In my case, I would like to put all the extra into 1:10 only. Does that mean
I have to give 1:11 a ceil of 10kbps? If so, does that mean I don't need
prio
exactly. The discussion should be about how to implement it
efficiently. What about having N IP buckets and N IP/PORT
buckets? When the IP/PORT hash resolves to a bucket, then we could
(re)link the
Consider a new classful queueing discipline SFC that behaves exactly
as SFQ does and can have only
Thank you for your help.
Here is the HTB
echo Clean all the tc setup
./tc qdisc del dev lo root
./tc qdisc add dev lo root handle 1: htb default 12
./tc class add dev lo parent 1: classid 1:1 htb rate 100kbps ceil 100kbps
./tc class add dev lo parent 1:1 classid 1:2 htb rate 40kbps ceil 100kbps
For the sake of a play-by-play (and why it wouldn't quite work right
initially):
1) we need to dequeue a packet
2) we ask the lower SFC (the 11-bit one) for a packet.
3) it asks the upper SFC
4) the upper SFC takes the next bucket, based on IP, and gives us a
packet.
5) the lower SFC takes that packet and
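Those dequeue steps can be sketched as a toy two-level model (hypothetical classes, not real qdisc code): the upper level buckets by IP, the lower level re-buckets by port, and each level round-robins over its non-empty buckets.

```python
from collections import deque

class ToySFC:
    """Round-robin over per-key subqueues (stand-in for one SFC level)."""
    def __init__(self, keyfunc):
        self.keyfunc = keyfunc
        self.queues = {}
    def enqueue(self, pkt):
        self.queues.setdefault(self.keyfunc(pkt), deque()).append(pkt)
    def dequeue(self):
        for key in list(self.queues):      # first non-empty bucket in order
            q = self.queues[key]
            if q:
                pkt = q.popleft()
                # rotate this key to the back so buckets take turns
                self.queues[key] = self.queues.pop(key)
                return pkt
        return None

upper = ToySFC(lambda p: p["dst"])    # step 4: next bucket based on IP
lower = ToySFC(lambda p: p["dport"])  # step 5: re-bucket by port

def dequeue_chain():
    """Steps 1-5: the lower SFC pulls from the upper, then serves fairly."""
    pkt = upper.dequeue()              # upper SFC hands over a packet
    if pkt is not None:
        lower.enqueue(pkt)             # lower SFC takes that packet and ...
    return lower.dequeue()             # ... dequeues fairly per port
```

With two packets to host A and one to host B queued upstairs, successive calls alternate hosts (A, B, A), which is the per-IP fairness the upper level is meant to provide.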
Thank you. I put 100kbps on both 1:11 and 1:10. 1:10 (prio 0) gets almost
50kbps from the 60kbps excess and 1:20 (prio 1) gets 10kbps from the excess.
Is this the expected result?
I guessed that all 60kbps (excess) should go to prio 0, or be split proportionally
to rate in each class.
Pat
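If excess really is split proportionally to rate within one priority level (my reading of HTB's sharing rule, worth checking against the HTB documentation; the rates below are made-up numbers, not the poster's config), the arithmetic looks like this:

```python
def share_excess(excess, rates):
    """Split excess bandwidth among same-priority classes
    proportionally to their configured rates."""
    total = sum(rates.values())
    return {name: excess * r / total for name, r in rates.items()}

# 60 kbps of excess split between two classes with rates 40 and 20 kbps
shares = share_excess(60, {"1:10": 40, "1:20": 20})
# shares == {"1:10": 40.0, "1:20": 20.0}
```

When priorities differ, HTB serves the lower prio number first up to its ceil, so a prio 0 class can take the whole excess before prio 1 sees any of it.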
On Fri, 7 Jun 2002, Martin
ethloop output (as text)
On Fri, 7 Jun 2002, King Yung Tong wrote:
Thank you. I put 100kbps on both 1:11 and 1:10. 1:10 (prio 0) gets almost
50kbps from the 60kbps excess and 1:20 (prio 1) gets 10kbps from the excess.
Is this the expected result?
I guessed that all 60kbps (excess) should go to prio 0, or
Thank you. I put 100kbps on both 1:11 and 1:10. 1:10 (prio 0) gets almost
50kbps from the 60kbps excess and 1:20 (prio 1) gets 10kbps from the excess.
Is this the expected result?
I guessed that all 60kbps (excess) should go to prio 0, or be split proportionally
to rate in each class. The rate on the original class is
6.0 101170 92980 20 0 101108 9802 570 0 186 186 0 0
6.5 98639 89751 38 0 98624 13015 577 0 44 44 0 0
7.0 99815 93172 20 0 99812 9755 578 0 10 10 0 0
here it is exactly what you want
7.5 63232 65056 0 0 39204 23397 41 0 3 3 0 0
8.0 54142 54575 0 0 22722 18971 3 0 1 1 0 0
8.5 51258 51361 0 0
And what is your total real packet rate?
Measured by bwm, for example?
I don't know, but it is not ... hmmm.
I am damned anyway, as there is no way to solve my problem. HTB is not going to help
me; all this Linux TC is not going to help me. Me? Why me? There are many people
fighting with