--- Mike Mestnik <[EMAIL PROTECTED]> wrote:
> Date: Fri, 25 Jun 2004 09:51:21 -0700 (PDT)
> From: Mike Mestnik <[EMAIL PROTECTED]>
> Subject: Re: IPP2P: Similar project l7-filter.
> To: Eicke Friedrich <[EMAIL PROTECTED]>
>
> --- Eicke Friedrich <[EMAIL PROTECTED]> wrote:
> > Mike Mestnik wrote:
>
Sorry for all the questions; I'm just trying to understand how it all works. Thank
you for your time.
Is the following configuration possible, where ht 800 links to 3: for the third octet
of the IP, and then 3: links to 4: for the fourth octet?
tc filter add dev eth0 parent 1:0 prio 10 handle 3: protocol ip u32 divisor 256
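For what it's worth, here is an untested sketch of how such a two-level chain might be wired up; the table ids 3: and 4:, the 10.2.0.0/16 range, the bucket values and the flowid are all illustrative assumptions, not taken from the thread:

```shell
# Two-level u32 hashing on the 3rd and 4th octets of the destination IP.
# Create the tables first, one 256-bucket table per octet level:
tc filter add dev eth0 parent 1:0 prio 10 handle 3: protocol ip u32 divisor 256
tc filter add dev eth0 parent 1:0 prio 10 handle 4: protocol ip u32 divisor 256

# From the root table 800::, hash on the 3rd octet of the dst address
# (the dst IP sits at offset 16; its 3rd octet is mask 0x0000ff00) into 3:.
tc filter add dev eth0 protocol ip parent 1:0 prio 10 u32 ht 800:: \
    match ip dst 10.2.0.0/16 hashkey mask 0x0000ff00 at 16 link 3:

# A packet hashed into bucket N of table 3: only sees filters in 3:N:, so
# the second-level link has to be added per bucket. E.g. for 10.2.5.0/24
# (3rd octet = 5), hash on the 4th octet (mask 0x000000ff) into table 4:.
tc filter add dev eth0 protocol ip parent 1:0 prio 10 u32 ht 3:5: \
    match ip dst 10.2.5.0/24 hashkey mask 0x000000ff at 16 link 4:

# Leaf filters then live in the buckets of 4:; 10.2.5.42 has 4th octet
# 42 = 0x2a, so its leaf goes into bucket 4:2a: (class 1:42 is assumed).
tc filter add dev eth0 protocol ip parent 1:0 prio 10 u32 \
    ht 4:2a: match ip dst 10.2.5.42 flowid 1:42
```

Note that with a single shared 4: table every first-level bucket funnels into the same second-level table, which is why the leaf still matches the full address; in practice you would more likely create a distinct second-level table per /24.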
I was doing a test on adding a lot of leaves and filters: I created 100,000 filters
with about 25,000 leaves, and it took over 30 minutes to add all the rules. Does
anyone have any info on the maximum (or thereabouts) number of rules a box can have
and still remain operational, and also still be able to install the rules?
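One thing that usually helps with bulk loads of this size is batching: iproute2 builds that support it can read many commands in one invocation via `tc -batch`, avoiding a fork/exec and netlink socket setup per rule. A rough sketch (device, addressing and flowid are made-up placeholders):

```shell
# Generate one "filter add ..." line per rule into a batch file, then feed
# the whole file to a single tc process.
: > filters.batch
for i in $(seq 0 255); do
    for j in $(seq 0 255); do
        echo "filter add dev eth0 parent 1:0 prio 10 u32" \
             "match ip dst 10.0.$i.$j flowid 1:10" >> filters.batch
    done
done
tc -batch filters.batch    # needs an iproute2 with -batch support
```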
Hi Devik, I played with your htbfair patch on 2.6.6 and found some
differences between 2.4 and 2.6 that cause problems when applying it.
Differences include rb_node, which was rb_node_t, and some other minor problems.
After "fixing" those diff troubles I still get the following error
compiling the kernel
One more I missed is from the documentation.
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 800:: \
match ip src 1.2.0.0/16 \
hashkey mask 0x00ff at 12 \
link 2:
Ok, some numbers need explaining. The default hash table is called 800:: and all
filtering starts there.
tc filter add dev eth0 parent 1:0 prio 10 handle 3: protocol ip u32 divisor 256
tc filter add dev eth0 protocol ip parent 1:0 prio 10 u32 ht 800:: \
    match ip dst 10.2.0.0/16 hashkey mask 0xff00 at 16 link 3:
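To make the bucket arithmetic concrete, here is a sketch of how a leaf filter would be placed into a table 3: hashed that way (the host address and flowid are illustrative, not from the thread):

```shell
# Table 3: is hashed on the 3rd octet of the dst address, so a filter for
# 10.2.5.1 (3rd octet = 5) belongs in bucket 3:5:; bucket ids are hex.
tc filter add dev eth0 protocol ip parent 1:0 prio 10 u32 \
    ht 3:5: match ip dst 10.2.5.1 flowid 1:55
```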
tc filter add dev eth1 parent 1:0 prio 10 handle 3: protocol ip u32 divisor 256
tc fil
--- Greg Stark <[EMAIL PROTECTED]> wrote:
>
> Ed Wildgoose <[EMAIL PROTECTED]> writes:
>
> > You need something which works at IP level or above. TCP (a level
> > higher) has some stuff, but (I repeat) it basically involves dropping
> > traffic until the sender slows down. There are protocols
On Friday 25 June 2004 06:14, Ross Skaliotis wrote:
> I'm trying to fill a token bucket with enough tokens to burst several gigs
> of data. However, it doesn't seem to get any higher than ~3.9GB:
>
> tc qdisc add dev eth0 root tbf rate 1440kbit latency 50ms \
>     burst 160
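The ~4GB ceiling is suggestive of a 32-bit byte counter: if (as seems likely, though I have not checked the tbf sources) the bucket size is carried as an unsigned 32-bit number of bytes, nothing above 2^32 bytes can be requested:

```shell
# 2^32 bytes is the largest value a u32 byte counter can express:
echo $((1 << 32))    # prints 4294967296, i.e. 4 GiB
```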
I have compiled IMQ as a module (with the NAT patch).
I have also compiled ip_queue as a module.
The problem is that when the imq module is loaded, you cannot load the
ip_queue module, and vice versa.
I'm not sure which "ip_queue" you mean, but on my 2.6 wolk kernel I have
IMQ compiled and ip_nf_queue (userspace
I have a question about a few things:
OK, you have parent x:x, handle x:x, link x:x, flowid, and so on. What are the
max values of each of these? Also, are these (or some of these) hex numbers? I tried
this on Red Hat 7.3, so there may have been some updates, but this is one of the
tests I did.
[
Is there a relation between the hash table ID and the (parent,handle) ID, so that if
I used 2: for a hash table I could or couldn't use 2: for a (parent,handle) ID?
No relation. You can use the same id for both.
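A small sketch illustrating that answer; the qdisc/class parameters are made up, and the point is only that the u32 table id 2: and the qdisc/class handle 2: live in separate namespaces:

```shell
tc qdisc add dev eth0 root handle 2: htb default 10       # qdisc handle 2:
tc class add dev eth0 parent 2: classid 2:10 htb rate 1mbit
# The same "2:" reused as a u32 hash table id -- no conflict:
tc filter add dev eth0 parent 2:0 prio 10 handle 2: protocol ip u32 divisor 256
```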
I also noticed that you type the hash tables like 2:2:. Can you have more levels
with this? Like 2:
Hi,
Thanks for the response, Ed.
It seems I have not been clear enough. Forget about frottle; currently the
problem is much simpler. I have two NICs in a bridge (which is the router's
LAN interface) and another NIC which is the WAN. The upstream can be
easily controlled with an egress qdisc set on the WAN
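For the simple upstream case described, a minimal egress setup on the WAN NIC might look like this (interface name and rates are assumptions):

```shell
# Shape all upstream traffic leaving the WAN interface to just under the
# uplink speed, so the queue builds here rather than in the modem.
tc qdisc add dev eth2 root handle 1: htb default 10
tc class add dev eth2 parent 1:  classid 1:1  htb rate 512kbit
tc class add dev eth2 parent 1:1 classid 1:10 htb rate 512kbit ceil 512kbit
```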