On 20.02.2016 at 12:40, Martin Waschbüsch wrote:
> 
>> On 20.02.2016 at 08:25, Alexandre DERUMIER <aderum...@odiso.com> wrote:
>>
>>>> Some articles, for instance 
>>>> https://www.kernel.org/doc/ols/2009/ols2009-pages-169-184.pdf, 
>>>> explicitly recommend disabling irqbalance when 10GbE is involved. 
>>>>
>>>> Do you know if this is still true today? After all, the paper is from 
>>>> 2009. 
>>
>> Well, the article is about disabling irqbalance AND manually binding the
>> network interface interrupts to CPUs.
>>
>> Manual binding is better because you can fine-tune it.
>>
>> But comparing irqbalance against doing nothing, irqbalance wins.
>>
>> I have seen a lot of systems using only cpu0 for network interrupts, for
>> example.
> 
> Ah, I see. In that case it would make sense indeed.
> The cards I use (both NIC and SAS/RAID) employ one interrupt queue per core,
> so there was never anything for me to tune.

Nearly all current cards and drivers do that. That's the reason why you
don't need it with current HW and drivers. Not sure why Mellanox isn't
doing that by default - see Alexandre's post.

Adaptec, LSI and Intel all have one queue per CPU / interrupt.
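
If someone does want to bind by hand (with irqbalance stopped first, so it
does not move things again), it boils down to writing a CPU mask to
/proc/irq/<N>/smp_affinity. Here is a rough Python sketch; the interface
name and the simple round-robin layout are only examples, not a
recommendation:

#!/usr/bin/env python3
# Rough sketch: spread a NIC's IRQs across CPUs by hand.
# Assumes a Linux host; needs root to write the masks.
import os

IFACE = "eth0"               # example interface name, adjust as needed
NUM_CPUS = os.cpu_count()

def nic_irqs(iface):
    # Collect (irq, queue name) pairs from /proc/interrupts for the NIC.
    irqs = []
    with open("/proc/interrupts") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0].rstrip(":").isdigit() and iface in fields[-1]:
                irqs.append((int(fields[0].rstrip(":")), fields[-1]))
    return irqs

def pin(irq, cpu):
    # Write a hex CPU mask to /proc/irq/<irq>/smp_affinity.
    with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
        f.write(f"{1 << cpu:x}\n")

if __name__ == "__main__":
    for i, (irq, name) in enumerate(nic_irqs(IFACE)):
        cpu = i % NUM_CPUS   # simple round-robin over all cores
        print(f"IRQ {irq} ({name}) -> CPU {cpu}")
        pin(irq, cpu)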

Stefan

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
