On Fri, 29 Jun 2012 08:44:52 +0800
Junchang Wang <[email protected]> wrote:

> >
> >
> > > >
> > > I encountered a similar problem, and we got a significant performance
> > boost
> > > after limiting the number of queues from 64 to 8. (The server is equipped
> > > with 80 cores.)
> > >
> > > I'm very curious about what happened. More queues can spread the workload
> > > and let programs use more CPU cores and cache. But why does limiting the
> > > number of queues give better performance? Can one of you shed light on
> > > this?
> > >
> > > Thanks.
> > >
> >
> > More queues means the hardware has to poll more rings and increases
> > the PCI bus bandwidth.
> >
> 
> Hi Stephen,
> 
> Thanks for the response.
> 
> But what do you mean by 'the hardware'? Did you mean CPU cores? We ran the
> experiments without NAPI but got the same result. And what do you mean by
> 'poll'? I have no idea which components in my system will poll NIC rings.
> Could you please elaborate on this?
> 
> 
> Thanks.
> 

The PCI bus has two inherent limits: bandwidth (bytes/sec) and transactions per
second. With more queues you increase the number of transactions per second.
Under load, which is where it matters, NAPI will be in polling mode, and in that
mode each busy queue will have a CPU reading the status register.
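To see the queue count in practice, you can inspect and reduce it from userspace with ethtool (a sketch; `eth0` is a placeholder for your interface, and `ethtool -L` requires a driver that supports changing channel counts):

```shell
# Show how many queue pairs the NIC currently uses
ethtool -l eth0

# Limit the driver to 8 combined queue pairs (fewer rings to poll)
ethtool -L eth0 combined 8

# Each queue gets its own interrupt vector; count them
grep eth0 /proc/interrupts
```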

Also, when there are more queues, the NIC receiver will be distributing the
receive ring updates to multiple locations. When only one ring is being updated,
there is a greater chance that multiple PCI writes can be combined (see WTHRESH).
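The effect can be sketched with a toy model (my own illustration, not driver code): the NIC can coalesce up to WTHRESH descriptor write-backs into one PCI transaction, but only within a single ring, and a ring holding even one completed packet still costs at least one write per poll cycle.

```shell
# Toy model: descriptor write-back transactions per poll cycle.
# Coalescing only works within one ring, so spreading the same
# packet rate across more rings defeats it.
descriptor_writes() {
    pkts=$1 queues=$2 wthresh=$3
    per_queue=$(( (pkts + queues - 1) / queues ))     # ceiling division
    writes=$(( (per_queue + wthresh - 1) / wthresh )) # ceiling division
    echo $(( queues * writes ))
}

descriptor_writes 100 8 16    # -> 8 write transactions per poll
descriptor_writes 100 64 16   # -> 64 write transactions per poll
```

With the same 100 packets per poll, 64 rings cost 8x the write transactions of 8 rings, because almost no ring accumulates enough completions to fill a WTHRESH batch.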

That said, the best answer depends on the workload (how many CPU cycles are
required per packet). The only way to determine that is by experimentation.
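Such an experiment might look like the loop below (a sketch; `eth0`, the queue counts, and the netperf target are placeholders for whatever interface and traffic load match your workload):

```shell
# Try several queue counts and record throughput for each;
# replace the netperf invocation with your own benchmark.
for q in 1 2 4 8 16 32 64; do
    ethtool -L eth0 combined "$q"
    echo "queues=$q"
    netperf -H 192.168.1.2 -t TCP_STREAM -l 30 | tail -n 1
done
```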

_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
