Thanks for the replies so far; I will investigate further.

Greg, the machines are virtualized on VMware ESX 5. The guests have 4 cores
each and the hosts have 16 cores.

Dave, I will have to read your email more carefully. Thank you for assuming
there is a sound reason for us looking at this. There isn't, really; I am
following the guide provided to us by EMC. It is autogenerated documentation
and the consultant/SME does not know either.

Regards

On 8 August 2012 16:08, Dave Costakos <david.costa...@gmail.com> wrote:

> https://access.redhat.com/knowledge/solutions/15482 also has some good
> (really good) information on this, and some approaches that may be easier
> to understand and use.
>
>
> On Wed, Aug 8, 2012 at 10:45 AM, Dave Costakos <david.costa...@gmail.com>
> wrote:
>
>> I don't personally know of a simpler way to set IRQ affinity on Linux
>> than turning off irqbalance and updating the
>> /proc/irq/<IRQNUM>/smp_affinity file.  I'm also not at all sure how pinning
>> interrupts inside a virtual machine will really help you much unless you
>> are really, really, really (3 reallys) sensitive to VM latency for that
>> particular interrupt or you really, really, really want to keep your
>> application from being interrupted by the kernel scheduler.  If you are so
>> sensitive, then there are probably numerous other things you could benefit
>> from aside from IRQ pinning inside a VM (like interrupt coalescing
>> settings, using a physical machine instead of a virtual one, choosing the
>> 'right' virtualized NIC driver by doing latency testing, using SR-IOV
>> network devices, or other such items).
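>>
>> For reference, on RHEL 5 the irqbalance service can be stopped and kept
>> from starting at boot with something along these lines (a sketch, not
>> tuned advice):
>>
>> # service irqbalance stop
>> # chkconfig irqbalance off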
>>
>> Regardless of the motivations, which I'm sure are sound, here are what I
>> hope are useful notes.  It's been a while, but I think the masses will
>> correct any of my errors faster than they can answer the question :D.
>>
>> The smp_affinity file for an IRQ is a hex bit mask.  Each bit represents
>> a CPU, so you can create a mask that identifies one or more CPUs. A mask
>> of 'ffffffff' would represent all of the first 32 CPUs because all of
>> those bits would be 1.  You can cat /proc/interrupts to see where
>> interrupts have landed on a per-CPU basis since the machine came up, to
>> confirm your smp_affinity mask has taken effect.
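>>
>> For example, assuming eth0 turns out to be on IRQ 93 as in the examples
>> below, a quick sanity check might look like this:
>>
>> # grep eth0 /proc/interrupts
>> # cat /proc/irq/93/smp_affinity
>>
>> The first command shows per-CPU interrupt counts for eth0 (and its IRQ
>> number in the first column); the second shows the mask currently in
>> effect for that IRQ.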
>>
>> So, to set affinity on a 4 CPU system like you have, the most
>> understandable way for me has been to convert a binary number to a hex
>> number and echo that into your smp_affinity file for that IRQ.  Presuming
>> IRQ 93, if you wanted to allow interrupts to occur only on CPU 3 (the
>> fourth CPU), you might try this:
>>
>> # echo -e "obase=16 \n ibase=2 \n 1000 \n" | bc
>> 8
>> # echo 08 > /proc/irq/93/smp_affinity
>>
>> Or this for CPUs 0 and 2:
>> # echo -e "obase=16 \n ibase=2 \n 101 \n" | bc
>> 5
>> # echo 05 > /proc/irq/93/smp_affinity
>>
>> You could of course script all of this, which is probably required since
>> a reboot will forget all your interrupt pinning.
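>>
>> A rough sketch of what that script might look like (purely illustrative;
>> it looks up eth0's IRQ at boot rather than hard-coding it, and reuses the
>> CPU 3 mask of 8 from above):
>>
>> # cat >> /etc/rc.d/rc.local <<'EOF'
>> IRQ=$(awk -F: '/eth0/ {gsub(/ /,"",$1); print $1; exit}' /proc/interrupts)
>> [ -n "$IRQ" ] && echo 8 > /proc/irq/$IRQ/smp_affinity
>> EOF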
>>
>> On Tue, Aug 7, 2012 at 10:57 AM, Greg Swift <g...@nytefyre.net> wrote:
>>
>>> On Tue, Aug 7, 2012 at 7:51 AM, Gerhardus Geldenhuis
>>> <gerhardus.geldenh...@gmail.com> wrote:
>>> > Hi
>>> >
>>> > I am following an EMC Networker guide which mentions stopping the
>>> > irqbalance service and setting affinity for all network interfaces
>>> > that are faster than 1Gb.
>>> >
>>> > This is not something I have done before so learning rapidly.
>>> >
>>> > The logic behind it seems fine, and the Red Hat guide seems to
>>> > suggest the same:
>>> >
>>> > http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-cpu-irq.html
>>> >
>>> > However, this guide is for RHEL 6 and I am using 5.8, so I am
>>> > reading it with a pinch of 5.8 salt.
>>> >
>>> > I have a number of questions:
>>> > 1. The server in question is virtual. Logically it would seem that
>>> > setting CPU affinity on virtualized infrastructure would have the
>>> > same effect as on physical infrastructure. Valid assumption?
>>> >
>>> > 2. The server is provisioned with 4 CPUs, each with 1 core. However,
>>> > cat /proc/irq/83/smp_affinity, which is my interrupt for eth0, shows
>>> > 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
>>> > (with irqbalance turned off).
>>> > I am not yet completely clear on how the masking works and have yet
>>> > to find a clear explanation. The mask has eight comma-separated
>>> > groups, double the number of cores I actually have, which is a bit
>>> > confusing, so any explanation would be appreciated.
>>> >
>>> > 3. Lastly, how do I set affinity? (Understanding q2 would probably
>>> > answer q3.) I am happy to reread any documentation. The man 5 proc
>>> > page was not detailed enough and I found the above-mentioned Red Hat
>>> > guide also lacking in clarity.
>>> >
>>> > Disclaimer: Still suffering from the effects of a severe bout of man
>>> > flu, so any lack of clarity might be entirely down to me....
>>>
>>> I haven't actually done what you are talking about, but I'm curious
>>> whether there might be a simpler way.
>>>
>>> Which hypervisor is hosting that virtual machine? How many CPUs does
>>> the host have (real vs. HT, etc.)?
>>>
>>> -greg
>>>
>>
>>
>>
>> --
>> Dave Costakos
>> mailto:david.costa...@gmail.com
>>
>
>
>
> --
> Dave Costakos
> mailto:david.costa...@gmail.com
>



-- 
Gerhardus Geldenhuis
_______________________________________________
rhelv5-list mailing list
rhelv5-list@redhat.com
https://www.redhat.com/mailman/listinfo/rhelv5-list
