On 12/20/15, 11:13 PM, "target-devel-ow...@vger.kernel.org on behalf of
Nicholas A. Bellinger" <target-devel-ow...@vger.kernel.org on behalf of
n...@linux-iscsi.org> wrote:

>On Thu, 2015-12-17 at 14:57 -0500, Himanshu Madhani wrote:
>> From: Quinn Tran <quinn.t...@qlogic.com>
>> 
>> At high traffic, the work queue can become a bottleneck.
>> Instead of putting each command on the work queue as one work
>> element, the fix daisy-chains the list of commands that came
>> from FW/interrupt under one work element to reduce lock contention.
>> 
>
>I'm wondering if we are better served by turning this into generic logic
>in kernel/workqueue.c to be used beyond qla2xxx, or:

QT> The pool->lock is grabbed for every work element. A new kernel
service would only be worth it if the lock could be grabbed once, with
the rest of the work elements piggybacking on it.


>
>using WQ_UNBOUND (following iser-target) and verify if observed
>bottleneck is due to internal !(WQ_UNBOUND) usage in qla_target.c.

QT> We tried WQ_UNBOUND as one of the low-hanging fruits. However,
unbound workqueues have a negative effect when it comes to scaling.


>
>Out of curiosity, what level of performance improvement does this
>patch (as is) actually provide..?

QT> By itself, without the rest of the patch series, the gain would be
minimal. With the full patch series, it allows us to consistently
sustain an additional (approx.) 50-70k IOPS @ 4k reads. Otherwise, the
load is not sustainable, nor does it have a chance to increase.

The test I used is 2 initiator ports vs. 2 target ports, with 2 QLogic
16G dual-port adapters and a TCM ramdisk backend.

>
>Thank you,
>
>--nab

