Re: [E1000-devel] e1000 rx_ring[0] protection

2009-07-01 Thread Brandeburg, Jesse
On Mon, 8 Jun 2009, Lal wrote:
  I am using the 2.6.21 kernel and CONFIG_E1000_NAPI is defined. It's a
  multi-core system.
 
  In e1000_intr, the interface on which the packet was received is queued
  in poll_list (which is per-CPU).
 
  this is the key.
 
  Later net_rx_action invokes dev->poll, which invokes the e1000_clean
  function; e1000_clean in turn invokes e1000_clean_rx_irq. Although
  this call can be made on any CPU, rx_ring is a common data
  structure and is not protected.
 
  Is rx_ring per-CPU or common to all CPUs?
 
  The OS guarantees that we will never have two poll events running
  simultaneously.
 
 
 Thanks Jesse, this answers my question.
 Having said this, can I conclude that on a multi-core or SMP system,
 only one core/CPU at a time will be processing packets from a given
 interface, while the remaining cores wait for the netpoll lock (for that
 interface)?

Yes, for interfaces that do not use RSS to spread flows out to multiple RX 
queues and do not have MSI-X.

 If yes, this is under-utilization of the cores. How can I overcome this?
 I am facing a situation where one core's usage goes to 100% while the rest 
 remain idle.

Some patches from Tom Herbert at Google recently went to netdev for review; 
they do basically exactly what you request, fanning flows out to multiple 
CPUs for adapters that do not have RSS+MSI-X.


--
___
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel


Re: [E1000-devel] e1000 rx_ring[0] protection

2009-06-03 Thread Lal
On Mon, Jun 1, 2009 at 9:21 PM, Brandeburg, Jesse
jesse.brandeb...@intel.com wrote:
 Peter, you're correct; however, the tx queue interface is protected by locks 
 (qdisc lock, netdev lock) in the stack.  And newer kernel versions of e1000 
 don't even have the tx_ring lock any more (in the driver).

 On the receive side the napi polling is per cpu and protects the driver from 
 re-entrancy using the napi struct.

 Jesse


I am using the 2.6.21 kernel and CONFIG_E1000_NAPI is defined. It's a
multi-core system.

In e1000_intr, the interface on which the packet was received is queued
in poll_list (which is per-CPU).
Later net_rx_action invokes dev->poll, which invokes the e1000_clean
function; e1000_clean in turn invokes e1000_clean_rx_irq. Although
this call can be made on any CPU, rx_ring is a common data
structure and is not protected.

Is rx_ring per-CPU or common to all CPUs?



Re: [E1000-devel] e1000 rx_ring[0] protection

2009-06-03 Thread Brandeburg, Jesse


On Wed, 3 Jun 2009, Lal wrote:

 On Mon, Jun 1, 2009 at 9:21 PM, Brandeburg, Jesse
 jesse.brandeb...@intel.com wrote:
  Peter, you're correct; however, the tx queue interface is protected by 
  locks (qdisc lock, netdev lock) in the stack.  And newer kernel 
  versions of e1000 don't even have the tx_ring lock any more (in the 
  driver).
 
  On the receive side the napi polling is per cpu and protects the 
  driver from re-entrancy using the napi struct.
 
  Jesse
 
 
 I am using the 2.6.21 kernel and CONFIG_E1000_NAPI is defined. It's a
 multi-core system.
 
 In e1000_intr, the interface on which the packet was received is queued
 in poll_list (which is per-CPU).

this is the key.

 Later net_rx_action invokes dev->poll, which invokes the e1000_clean
 function; e1000_clean in turn invokes e1000_clean_rx_irq. Although
 this call can be made on any CPU, rx_ring is a common data
 structure and is not protected.
 
 Is rx_ring per-CPU or common to all CPUs?

The OS guarantees that we will never have two poll events running 
simultaneously.



Re: [E1000-devel] e1000 rx_ring[0] protection

2009-05-31 Thread Peter Teoh
Interesting question/observation... never noticed this... just making a guess:

When sending out, multiple CPUs may simultaneously want to send out
together...
But when coming back, per network card, all packets are already
serialized by the hardware... so since this driver is per piece of
hardware... it should not be a problem?

On Fri, May 29, 2009 at 6:49 PM, Lal learner.ker...@gmail.com wrote:
 In the e1000_clean function in drivers/net/e1000_main.c, tx_ring[0]
 is protected by a spin lock to prevent it from being cleaned by multiple
 CPUs simultaneously, but rx_ring[0] is not.

 Why is rx_ring[0] not protected from multiple CPUs?

 Thanks
 -Lal






-- 
Regards,
Peter Teoh



[E1000-devel] e1000 rx_ring[0] protection

2009-05-29 Thread Lal
In the e1000_clean function in drivers/net/e1000_main.c, tx_ring[0]
is protected by a spin lock to prevent it from being cleaned by multiple
CPUs simultaneously, but rx_ring[0] is not.

Why is rx_ring[0] not protected from multiple CPUs?

Thanks
-Lal
