Re: [LARTC] how to change classful netem loss probability?

2006-04-27 Thread George P Nychis
And if it's not possible to change the probability, is there another method I
can use instead?
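Two hedged guesses (assumptions on my part, not verified against this xcp setup): the tc versions I know document netem's random-loss keyword as `loss` rather than `drop`, and `change` tends to be stricter than `add` about options and handles:

```shell
# Sketch, untested here: retry the change using netem's documented
# "loss" keyword with the same handle the qdisc was created under.
tc qdisc change dev ath0 root handle 1: netem loss 2%

# Heavier fallback if "change" keeps returning EINVAL: "replace"
# swaps the qdisc in one step (but re-check the child qdisc after).
tc qdisc replace dev ath0 root handle 1: netem loss 2%
```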

> Hi,
> 
> I am using netem to add loss, and am adding another qdisc within netem
> according to the wiki. I then want to change the netem drop probability
> without having to delete the qdisc and recreate it. When I try, I get
> "Invalid argument":
> 
> thorium-ini hedpe # tc qdisc add dev ath0 root handle 1:0 netem drop 1%
> thorium-ini hedpe # tc qdisc add dev ath0 parent 1:1 handle 10: xcp capacity 54Mbit limit 500
> thorium-ini hedpe # tc -s qdisc ls dev ath0
> qdisc netem 1: limit 1000 loss 1%
>  Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
> qdisc xcp 10: parent 1:1 capacity 52734Kbit limit 500p
>  Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
> thorium-ini hedpe # tc qdisc change dev ath0 root handle 1:0 netem drop 1%
> RTNETLINK answers: Invalid argument
> thorium-ini hedpe # tc qdisc change dev ath0 root netem drop 1%
> RTNETLINK answers: Invalid argument
> 
> any ideas?
> 
> Thanks!
> George
> 
> 


-- 

___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc


[LARTC] MULTIPATH: how to control cache expiration time?

2006-04-27 Thread Luciano Ruete
I have a 2.6.12 (Ubuntu patchset) kernel, recompiled with these routing options:
   [*]   IP: advanced router
   [*] IP: policy routing
   [*] IP: equal cost multipath


Load balancing is working great, but I have problems with long-term TCP flows
(like MSN Messenger, VPNs, or any other long-lived connection).

I assume this is because, after a period of time, the per-host route cache
entry expires and packets get re-routed - sometimes, unfortunately, via a
different iface. Note that I am not doing NAT on this box, just routing; the
NAT is done in each of the listed nexthops (so, no Julian's patches applied).

I've found[1] that /proc/sys/net/ipv4/route/secret_interval
"instructs the kernel how often to blow away ALL route hash entries regardless
of how new/old they are."

- Will setting secret_interval to one day solve my problem? I think even a
day is not enough (I have ssh sessions open for longer than that).
- Are there other values I should take into account (route table cache/hash
size, memory)?
- Does anyone know a better approach?
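The first two questions can be sketched concretely (assuming the standard 2.6 sysctl names; the interval is in seconds):

```shell
# Sketch: read the current full-flush interval of the route hash,
# then raise it to one day (86400 s). Assumption: a longer interval
# keeps cached per-destination routes - and therefore the chosen
# nexthop - stable for longer between flushes.
cat /proc/sys/net/ipv4/route/secret_interval
sysctl -w net.ipv4.route.secret_interval=86400

# Related knobs worth inspecting before changing anything:
sysctl net.ipv4.route.gc_interval net.ipv4.route.gc_timeout net.ipv4.route.max_size
```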


Another thing (besides the previous problem): if I compile the kernel with
CONFIG_IP_ROUTE_MULTIPATH_CACHED enabled:
[*]  IP: equal cost multipath with caching support (EXPERIMENTAL)

multipath stops working and all packets get routed to the last iface in the
nexthop statements. I tried compiling the four multipath modules/algorithms
and modprobing them, but got the same result. Because of that I had to go
back to equal cost multipath with CONFIG_IP_ROUTE_MULTIPATH_CACHED disabled.
If someone can give me a hint on this too, that would be nice, because
something keeps itching.


[1]http://lwn.net/Articles/145406/


Just in case some commands output:

[EMAIL PROTECTED]:/backup/ftp# ip ro ls table adsl
192.168.10.37 via 192.168.90.3 dev eth2
192.168.100.0/24 dev eth1  proto kernel  scope link  src 192.168.100.1
192.168.50.0/24 dev eth2  proto kernel  scope link  src 192.168.50.1
192.168.3.0/24 dev eth6  proto kernel  scope link  src 192.168.3.2
192.168.2.0/24 dev eth5  proto kernel  scope link  src 192.168.2.2
192.168.1.0/24 dev eth4  proto kernel  scope link  src 192.168.1.2
192.168.90.0/24 dev eth2  proto kernel  scope link  src 192.168.90.1
default  proto static
nexthop via 192.168.1.1  dev eth4 weight 1
nexthop via 192.168.2.1  dev eth5 weight 1
nexthop via 192.168.3.1  dev eth6 weight 1
[EMAIL PROTECTED]:/backup/ftp# ip ro show cache | egrep 'eth4|eth5|eth6' -B1 | tail -n20
201.216.128.100 from 192.168.90.5 via 192.168.3.1 dev eth6  src 192.168.90.1
--
192.168.90.5 from 201.240.149.1 dev eth2  src 192.168.1.2
cache  mtu 1500 advmss 1460 hoplimit 64 iif eth5
--
cache   mtu 1500 advmss 1460 hoplimit 64 iif eth2
200.114.138.45 from 192.168.90.5 via 192.168.1.1 dev eth4  src 192.168.90.1
--
192.168.90.5 from 200.74.39.52 dev eth2  src 192.168.1.2
cache  mtu 1500 advmss 1460 hoplimit 64 iif eth5
71.80.214.141 from 192.168.90.5 via 192.168.1.1 dev eth4  src 192.168.90.1
--
cache   mtu 1500 advmss 1460 hoplimit 64 iif eth2
24.86.57.13 from 192.168.90.5 via 192.168.1.1 dev eth4  src 192.168.90.1
--
192.168.90.5 from 69.66.58.31 dev eth2  src 192.168.1.2
cache  mtu 1500 advmss 1460 hoplimit 64 iif eth5
--
192.168.90.5 from 61.228.9.180 dev eth2  src 192.168.1.2
cache  mtu 1500 advmss 1460 hoplimit 64 iif eth4

[EMAIL PROTECTED]:/backup/ftp# grep ROUTE /boot/config-2.6.12-luciano.1
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_ROUTE_FWMARK=y
CONFIG_IP_ROUTE_MULTIPATH=y
# CONFIG_IP_ROUTE_MULTIPATH_CACHED is not set
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_MROUTE=y
CONFIG_BRIDGE_EBT_BROUTE=m
# CONFIG_DECNET_ROUTER is not set
CONFIG_WAN_ROUTER=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_ROUTE=y
CONFIG_WAN_ROUTER_DRIVERS=y
[EMAIL PROTECTED]:/backup/ftp#



[LARTC] A doubt

2006-04-27 Thread Juan Felipe Botero
Hi,

I am working on a project and needed to know whether the combination HTB+SFQ
works well, so I started a test with tcng. For 20 minutes I ran 4 types of
traffic between 2 computers: an HTTP file transfer, an FTP file transfer, a
TFTP file transfer, and an interactive SSH session.

For all the file-transfer protocols (FTP, HTTP, TFTP) I obtained good
results; the measured bandwidth share matched what was configured. But for
the interactive SSH transfer, the real bandwidth share was far above the
configured one.

Does anyone know whether interactive traffic is well supported by these
queueing disciplines?

-- 
Juan Felipe Botero
Systems Engineer
Universidad de Antioquia
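For reference, a minimal plain-tc version of the kind of HTB+SFQ hierarchy being tested might look like this (device, rates, and class ids are illustrative assumptions, not taken from the actual tcng configuration):

```shell
# Sketch: HTB parent with one guaranteed-share child and SFQ as its leaf.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 250kbit ceil 1mbit
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
```

One plausible explanation for the SSH number: an interactive session offers far less traffic than its share, so HTB lets it borrow up to its ceil, and the measured percentage can end up well above the configured rate without anything being wrong.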


RES: RES: [LARTC] Backlog with less rate than defined

2006-04-27 Thread Luciano
Hi Andy,

I changed the configuration to drop the default on htb and send unmatched IP
packets to a limited queue. It's now working fine.

Thanks a lot.

Luciano

-----Original Message-----
From: Andy Furniss [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 25 April 2006 20:16
To: Luciano
Cc: lartc@mailman.ds9a.nl; [EMAIL PROTECTED]
Subject: Re: RES: [LARTC] Backlog with less rate than defined

Luciano wrote:
> Hi Andy,
> 
> I'm not sure if I understood what you told me about ARP packets.
> I use htb default, but the problem occurs even when the default queue
> rate is low (it is almost always low in rate and pps).

It's still not ideal even if it's not the cause - sfq default length is
128 packets, so if they were MTU size, a full queue means about 1.5 sec of
delay plus drops - and the stats show drops.
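The 1.5 sec figure is just queue length times per-packet transmit time (128 x 1500 bytes is roughly 1.5 Mbit, i.e. 1.5 s at 1 Mbit/s). A hedged sketch of shortening the leaf queue, assuming the installed sfq supports the `limit` parameter:

```shell
# Sketch: cap the sfq backlog at 32 packets, so the worst case at
# 1 Mbit with 1500-byte packets is ~0.4 s of queueing instead of ~1.5 s.
tc qdisc replace dev eth0 parent 1:efff handle efff: sfq limit 32 perturb 10
```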

class htb 1:efff parent 1:1 leaf efff: prio 1 rate 1Mbit ceil 1Mbit 
burst 2909b cburst 2909b
  Sent 1113213839 bytes 9059857 pkts (dropped 61529, overlimits 0)
  rate 1130bps 13pps
  lended: 9059857 borrowed: 0 giants: 0

I would not use default on eth. Also, 100mbit eth is not 100mbit at the IP
level, which is almost what htb sees (IP + 14 bytes), so 1:1 needs to be set
lower - but if the children don't add up to that, it won't hurt.

You could just send all unmatched IP to 1:efff with a low-prio filter:

tc filter add dev eth0 protocol ip parent 1:0 prio 99 u32 match u32 0 0 flowid 1:efff

then ARP will not get shaped.

I notice you use handle on your filters - maybe that's OK, but I usually
only see it used with hashing or fw.

> 
> The attached files are:
> Rc.local - creation of the basic queues, including the default
> Regras.inc - creation of each queue when the user logs in
> Queues - statistics of the basic queues

Have you measured the rate another way?

Andy.



[LARTC] Is there ping2?

2006-04-27 Thread Darko
Hi,

I would like to set up access to the Internet via two providers. When they
both work OK I use:

ip route add default scope global nexthop via x.x.x.x dev eth0 weight 1 nexthop via y.y.y.y dev eth1 weight 1

Next, I use a script that regularly pings their upstream providers to see
whether one of the providers is down. If one is down I want to route all
communication via the other interface. So if the second one is down:

ip route del default
ip route add default via x.x.x.x

It works OK, but at that moment I can't ping any nonlocal address through
eth1, so I can't check when that provider comes back up.

ping z.z.z.z -I eth1   - gives nothing

tcpdump shows that eth1 broadcasts an ARP request for the nonlocal address ?!?
The tables for both interfaces are set up correctly, and it seems that ping
consults only the main table.
At the same time, while there is no default route via that provider but the
provider is up, all communication initiated by peers on the Internet arrives
on the non-default interface - even pings - and works OK.
When the default route is manually set back to go through both nexthops,
everything returns to normal.
One more strange thing: my script worked OK for over a year, until I had to
change the netmask and default gw for one provider.
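A hedged sketch of the usual policy-routing fix (the table number and the w.w.w.w placeholder are mine, for illustration): give each provider its own table with its own default route, and add a source rule so probes bound to eth1's address bypass the main table:

```shell
# Sketch: per-provider table so the probe keeps working even when
# the shared default in the main table no longer points at eth1.
ip route add default via y.y.y.y dev eth1 table 102
ip rule add from w.w.w.w table 102   # w.w.w.w = eth1's own address
ping -c 3 -I w.w.w.w z.z.z.z         # bind to the address, not just the device
```

The idea is that binding ping to the source address, combined with a matching `from` rule, selects the provider's table for the lookup; binding only to the device does not by itself pick a different routing table.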

TIA, Darko


Re: [LARTC] Unsubscribe

2006-04-27 Thread Jody Shumaker
At the bottom of every single e-mail on this list are directions on
how to correctly unsubscribe.  Could you please not make a fool of
yourself (twice) and actually read them?

- Jody