On Fri, Jan 09, 2015 at 03:58:16PM -0500, Martin K. Petersen wrote:
> (Still dreaming of a combined mpt2sas and mpt3sas so I wouldn't have to
> review everything twice).
We really need to start that action instead of dreaming. Once this
series is in I'll move the two drivers to at least a shared
> "Sreekanth" == Sreekanth Reddy writes:
Sreekanth> Change_set: 1. Added an affinity_hint variable of type
Sreekanth> cpumask_var_t in the adapter_reply_queue structure, and allocated
Sreekanth> memory for this variable by calling alloc_cpumask_var.
Sreekanth> 2. Called the API irq_set_affinity_hint for each MSI-X vector.
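The change set above can be modeled in userspace C. This is an illustrative sketch, not driver code: `cpu_set_t` stands in for the kernel's `cpumask_var_t`, and the `reply_queue_model` struct and `model_set_hint` helper are made-up names. In the driver, the populated mask would then be handed to `irq_set_affinity_hint()`.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>

/* Userspace model of the change set: one mask per reply queue, with the
 * queue's CPUs set in it. In the driver this is zalloc_cpumask_var() +
 * cpumask_set_cpu(), followed by irq_set_affinity_hint(vector, mask);
 * here cpu_set_t stands in for cpumask_var_t. */
struct reply_queue_model {
	int msix_index;
	cpu_set_t affinity_hint;	/* driver: cpumask_var_t affinity_hint */
};

static void model_set_hint(struct reply_queue_model *q, const int *cpus, int n)
{
	int i;

	CPU_ZERO(&q->affinity_hint);
	for (i = 0; i < n; i++)
		CPU_SET(cpus[i], &q->affinity_hint);
	/* driver would now call irq_set_affinity_hint() with this mask */
}
```

The hint only advises irqbalance; it does not force the kernel to route the interrupt to those CPUs.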
Martin,
I have kept your CPUs-to-reply-queues affinity logic as it is. I have just
replaced the do-while loop with a list_for_each_entry loop over the reply
queues, and it doesn't alter your CPUs-to-reply-queues affinity logic. I am
confident in it, since I have tested this code in various scenarios, i.e. w
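The loop change under discussion swaps an open-coded do-while for the kernel's list_for_each_entry macro. A minimal userspace replica of that idiom (using container_of/offsetof the same way the kernel does) shows the loop shape; the struct fields here are a cut-down sketch of adapter_reply_queue, and list_add_tail_ is a simplified stand-in for the kernel's list_add_tail, not the real definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace replica of the kernel doubly linked list idiom; illustrative
 * only. The macros mirror the shape of include/linux/list.h. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Same shape as the kernel's list_for_each_entry(). */
#define list_for_each_entry(pos, head, member)                          \
	for (pos = container_of((head)->next, __typeof__(*pos), member); \
	     &pos->member != (head);                                     \
	     pos = container_of(pos->member.next, __typeof__(*pos), member))

/* Cut-down sketch of adapter_reply_queue. */
struct reply_queue {
	int msix_index;
	struct list_head list;
};

/* Simplified stand-in for the kernel's list_add_tail(). */
static void list_add_tail_(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}
```

Iterating with list_for_each_entry visits every queue on ioc->reply_queue_list in insertion order, which is why it can replace the do-while without changing which queues are touched.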
>> @@ -1609,6 +1611,10 @@ _base_request_irq(struct MPT3SAS_ADAPTER *ioc, u8
>> index, u32 vector)
>> reply_q->ioc = ioc;
>> reply_q->msix_index = index;
>> reply_q->vector = vector;
>> +
>> + if (!zalloc_cpumask_var(&reply_q->affinity_hint, GFP_KERNEL))
>> + return
> @@ -1373,20 +1380,30 @@ _base_assign_reply_queues(struct MPT2SAS_ADAPTER *ioc)
>
> cpu = cpumask_first(cpu_online_mask);
>
> - do {
> + list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
> +
>
> Why are you reverting to iterating over the queues? A while back I fixed
> -----Original Message-----
> From: linux-scsi-ow...@vger.kernel.org [mailto:linux-scsi-
> ow...@vger.kernel.org] On Behalf Of Sreekanth Reddy
> Sent: Tuesday, 09 December, 2014 6:17 AM
> To: martin.peter...@oracle.com; j...@kernel.org; h...@infradead.org
...
> Change_set:
> 1. Added affinity_hint
> "Sreekanth" == Sreekanth Reddy writes:
Sreekanth,
@@ -1373,20 +1380,30 @@ _base_assign_reply_queues(struct MPT2SAS_ADAPTER *ioc)
cpu = cpumask_first(cpu_online_mask);
- do {
+ list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
+
Why are you reverting to iterating over the queues? A while back I fixed
> Wouldn't it be better to do this in _base_assign_reply_queues since
> we're already iterating there?
Hi Martin,
As per your suggestion, I modified this feature with the below changes.
Added support to set the CPU affinity mask for each MSI-X vector enabled by
the HBA, so that, by running the irqbalancer, interrupts can be balanced
among the CPUs.
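The question about the iteration concerns how _base_assign_reply_queues spreads the online CPUs across the reply queues. The even-split arithmetic can be sketched in plain C; this is an illustrative model of that policy, not the driver's code, and assign_cpus and cpu_to_queue are made-up names.

```c
#include <assert.h>

/* Illustrative model: distribute num_cpus CPUs across num_queues reply
 * queues as evenly as possible, the way _base_assign_reply_queues spreads
 * CPUs over MSI-X vectors. cpu_to_queue[c] receives the queue index that
 * CPU c is grouped with. */
static void assign_cpus(int num_cpus, int num_queues, int cpu_to_queue[])
{
	int grp = num_cpus / num_queues;	/* CPUs per queue */
	int extra = num_cpus % num_queues;	/* first 'extra' queues get one more */
	int cpu = 0;
	int q, i, take;

	for (q = 0; q < num_queues; q++) {
		take = grp + (q < extra ? 1 : 0);
		for (i = 0; i < take; i++)
			cpu_to_queue[cpu++] = q;
	}
}
```

With 8 CPUs and 3 queues, the queues receive 3, 3, and 2 CPUs respectively, so no queue is more than one CPU off from any other.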
> "Sreekanth" == Sreekanth Reddy writes:
Sreekanth> Added support to set the CPU affinity mask for each MSI-X vector
Sreekanth> enabled by the HBA, so that, by running the irqbalancer,
Sreekanth> interrupts can be balanced among the CPUs.
Wouldn't it be better to do this in _base_assign_reply_queues since we're
already iterating there?