Egress path / congestion detection and avoidance

2008-02-11 Thread Jeba Anandhan
Hi All, I have a few doubts related to congestion and the egress path. 1) What happens when congestion hits, and how does it affect the egress path? 2) How does egress traffic slow down when congestion hits? 3) Which variables are updated when congestion hits? 4) Is it good to invoke netif_stop_queue…
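
On 3) and 4): netif_stop_queue() merely sets the __LINK_STATE_XOFF bit in dev->state, which tells the core to stop handing frames to the driver; outgoing packets then back up in the device's qdisc until netif_wake_queue() clears the bit. A minimal sketch of that common driver pattern, for a hypothetical driver (my_ring_full(), my_tx_ring_space() and LOW_WATER are made up; the netif_* helpers are the real kernel ones):

#include <linux/netdevice.h>

#define LOW_WATER 8                       /* hypothetical refill threshold */

struct my_priv {                          /* hypothetical driver state */
    struct net_device *dev;
    /* ... TX ring bookkeeping ... */
};

static int my_ring_full(struct my_priv *priv);     /* hypothetical */
static int my_tx_ring_space(struct my_priv *priv); /* hypothetical */

/* hard_start_xmit: called by the core for every outgoing frame. */
static int my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
    struct my_priv *priv = netdev_priv(dev);

    /* ... post skb to the hardware TX ring ... */

    if (my_ring_full(priv))
        netif_stop_queue(dev); /* core stops calling us; the qdisc
                                * queues packets, then drops them  */
    return NETDEV_TX_OK;
}

/* TX-complete interrupt: descriptors reclaimed, so reopen the queue. */
static void my_tx_irq(struct my_priv *priv)
{
    if (netif_queue_stopped(priv->dev) &&
        my_tx_ring_space(priv) > LOW_WATER)
        netif_wake_queue(priv->dev);
}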

Performance issue: GRE tunneling

2008-01-18 Thread Jeba Anandhan
Hi All, when I send traffic outside the GRE tunnel, the speed is 3-4 Mbps. When I send traffic through the tunnel, the speed drops hugely, to around 100-300 Kbps. What factors affect performance when we use tunneling [e.g. GRE tunneling]? Thanks, Jeba
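
Two factors usually dominate with GRE: the encapsulation itself (an outer IPv4 header plus a GRE header on every packet) and, much worse, the MTU effect: the tunnel's effective MTU drops to about 1476, so full-size 1500-byte inner packets fragment on every send unless path-MTU discovery or a lowered tunnel MTU takes effect. Hardware offloads (checksum, TSO) may also stop applying to encapsulated traffic. A back-of-envelope user-space sketch, assuming plain IPv4 GRE with no key/checksum/sequence options:

#include <stdio.h>

int main(void)
{
    const int mtu        = 1500; /* physical link MTU   */
    const int outer_ip   = 20;   /* outer IPv4 header   */
    const int gre_hdr    = 4;    /* minimal GRE header  */
    const int tunnel_mtu = mtu - outer_ip - gre_hdr;   /* 1476 */

    printf("effective tunnel MTU: %d\n", tunnel_mtu);

    /* A 1500-byte inner packet no longer fits in one frame:
     * it becomes two fragments, roughly doubling per-packet cost. */
    int inner = 1500;
    int frags = (inner + tunnel_mtu - 1) / tunnel_mtu;
    printf("a %d-byte inner packet needs %d fragments\n", inner, frags);
    return 0;
}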

Doubt in e1000_io_write()

2008-01-11 Thread Jeba Anandhan
Hi all, I have a doubt in e1000_io_write():

void e1000_io_write(struct e1000_hw *hw, unsigned long port, uint32_t value)
{
    outl(value, port);
}

Kernel version: 2.6.12.3. Even though the hw structure is not used, why is it passed into e1000_io_write()? Thanks, Jeba
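
One common reason for an unused parameter like this is a uniform accessor signature: every register-access variant (port I/O, memory-mapped I/O) takes the same arguments, so the same call sites, or a function pointer, can use either. A hypothetical illustration in that spirit, not the actual e1000 code:

#include <asm/io.h>
#include "e1000_hw.h"   /* for struct e1000_hw (has a hw_addr base) */

/* One signature shared by all access styles. */
typedef void (*io_write_fn)(struct e1000_hw *hw,
                            unsigned long port, uint32_t value);

static void io_write_portio(struct e1000_hw *hw,
                            unsigned long port, uint32_t value)
{
    (void)hw;                 /* port I/O needs no per-device state */
    outl(value, port);
}

static void io_write_mmio(struct e1000_hw *hw,
                          unsigned long port, uint32_t value)
{
    writel(value, hw->hw_addr + port);  /* MMIO does need hw */
}

The port-I/O variant ignores hw; the memory-mapped variant needs it for the base address, so the parameter stays in the shared signature.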

Improving performance of bonding driver (eql) using round-robin algorithm

2008-01-11 Thread Jeba Anandhan
Hi All, the existing algorithm in the eql bonding driver works based on the priority of each slave. The priority is assigned as the speed of the particular line. The current problem is that not all slaves get the chance to be the best slave for transmission. Would a round-robin algorithm for slave selection help?
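
A minimal sketch of the round-robin idea (the structure and fields here are hypothetical; the real eql driver keeps a priority-ordered slave queue and selects by queued bytes versus priority):

#include <linux/netdevice.h>

struct eql_rr {                      /* hypothetical state */
    struct net_device *slaves[8];
    unsigned int slave_cnt;
    unsigned int rr_idx;
};

/* Cycle through the slaves so every line gets a turn. */
static struct net_device *rr_pick_slave(struct eql_rr *eq)
{
    eq->rr_idx = (eq->rr_idx + 1) % eq->slave_cnt;
    return eq->slaves[eq->rr_idx];
}

Pure round robin throws away the line-speed information the current scheduler uses; a weighted round robin, giving each slave a number of consecutive transmissions proportional to its priority, would keep it.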

Re: e1000 performance issue in 4 simultaneous links

2008-01-10 Thread Jeba Anandhan
Ben, I am facing a performance issue when we try to bond multiple interfaces into a virtual interface. It could be related to this thread. My questions are: *) When we use multiple NICs, will the overall system performance be the summation of all the individual lines' XX bits/sec? *) What are the…
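
On the first question: the aggregate can approach the sum of the lines only across many flows. A single flow stays on one slave, because the transmit hash maps a given address pair to exactly one device. A user-space sketch in the spirit of the bonding driver's layer-2 xmit_hash_policy:

#include <stdio.h>

/* XOR the low MAC bytes and take it modulo the slave count. */
static unsigned pick_slave(const unsigned char *src_mac,
                           const unsigned char *dst_mac,
                           unsigned n_slaves)
{
    return (src_mac[5] ^ dst_mac[5]) % n_slaves;
}

int main(void)
{
    unsigned char a[6] = {0, 0, 0, 0, 0, 0x01};
    unsigned char b[6] = {0, 0, 0, 0, 0, 0x02};

    /* Every packet of this flow hashes to the same slave, so a
     * single TCP stream tops out at one NIC's line rate. */
    printf("flow a->b uses slave %u of 4\n", pick_slave(a, b, 4));
    return 0;
}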

Re: SMP code / network stack

2008-01-10 Thread Jeba Anandhan
Hi Eric, thanks for the reply. I have one more doubt. For example, suppose we have 2 processors and 4 ethernet cards, and only CPU0 does all the work for the 4 cards. If we set the affinity of each ethernet card's interrupt to a CPU number, will it be efficient? Is this the default behavior? # cat /proc/interrupts
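
Spreading the cards' IRQs across CPUs is done by writing a CPU bitmask to /proc/irq/<n>/smp_affinity; it is not the default, and irqbalance (if running) may overwrite it. A small user-space sketch; the IRQ numbers here are hypothetical and should be taken from /proc/interrupts:

#include <stdio.h>

static int set_irq_affinity(int irq, unsigned cpu_mask)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%x\n", cpu_mask);   /* hex bitmask of allowed CPUs */
    return fclose(f);
}

int main(void)
{
    set_irq_affinity(24, 0x1);  /* eth0's IRQ -> CPU0 (mask 0x1) */
    set_irq_affinity(25, 0x2);  /* eth1's IRQ -> CPU1 (mask 0x2) */
    return 0;
}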

TCP/IP stack / SMP kernel

2008-01-10 Thread Jeba Anandhan
Hi All, I am just wondering how the TCP/IP stack runs on an SMP kernel in a multiprocessor environment. Will the TCP/IP stack be on one processor, or is it shared among the different processors? Thanks, Jeba

SMP code / network stack

2008-01-10 Thread Jeba Anandhan
Hi All, if a server has multiple processors and N ethernet cards, is it possible to handle transmission on each processor separately? In other words, can each processor be responsible for the tx of a few ethernet cards? Example: a server has 4 processors and 8 ethernet cards. Is it possible…

EQL / doubts

2008-01-10 Thread Jeba Anandhan
Hi All, I have a few questions about the EQL driver. *) Why is tx_queue_len set to 5? For example, if we bond 3 lines and each has a tx_queue_len of 1000, will the bonding line's (eql) tx_queue_len be the sum of these three, i.e. 3000? *) Quest…
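
The master's tx_queue_len is its own setting; the stack does not sum the slaves' queues, since each net_device keeps its own qdisc queue. One way to see this is to read each value with the SIOCGIFTXQLEN ioctl; a user-space sketch (the interface names are hypothetical):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <unistd.h>

/* Return the txqueuelen of a named interface, or -1 on error. */
static int txqlen(int fd, const char *name)
{
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIFTXQLEN, &ifr) < 0)
        return -1;
    return ifr.ifr_qlen;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    printf("eql: %d\n", txqlen(fd, "eql"));  /* e.g. 5    */
    printf("sl0: %d\n", txqlen(fd, "sl0"));  /* e.g. 1000 */
    close(fd);
    return 0;
}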