>
>
> > >
> > I encountered a similar problem, and we got a significant performance
> > boost after limiting the number of queues from 64 to 8. (The server is
> > equipped with 80 cores.)
> >
> > I'm very curious about what happened. More queues can spread the
> > workload and let programs use more CPU cores and cache. But why does
> > limiting the number of queues give better performance? Can one of you
> > shed light on this?
> >
> > Thanks.
> >
>
> More queues mean the hardware has to poll more rings, which increases
> PCI bus bandwidth usage.
>
Hi Stephen,
Thanks for the response.
But what do you mean by 'the hardware'? Did you mean the CPU cores? We ran
the experiments without NAPI but got the same result. And what do you mean
by 'poll'? I have no idea which components in my system poll the NIC rings.
Could you please elaborate on this?
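For context, the queue-count change described earlier (from 64 down to 8) is
typically done through ethtool's channel interface. This is only a sketch:
the interface name eth0 and the value 8 are assumptions, and whether the NIC
exposes "combined" channels depends on the driver.

```shell
# Inspect the current and maximum queue (channel) counts for the NIC.
ethtool -l eth0

# Limit the NIC to 8 combined RX/TX queues (assumes a driver that
# supports combined channels; some drivers use separate rx/tx counts).
ethtool -L eth0 combined 8
```

Whether fewer queues help depends on the workload; with many queues, each
ring sees less traffic, which can hurt interrupt coalescing and cache
locality, so measuring both settings as done above is the right approach.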
Thanks.
--
--Junchang
_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit
http://communities.intel.com/community/wired