On 2/6/18, 12:51 AM, "Lee Duncan" wrote:
>On 02/05/2018 11:15 AM, Lee Duncan wrote:
>> On 01/31/2018 10:57 PM, Nilesh Javali wrote:
>>> Adjust the NULL byte added by snprintf.
>>>
>>> Signed-off-by: Nilesh Javali
>>> ---
>>>
> > We still have more than one reply queue whose completions end up on one CPU.
>
> pci_alloc_irq_vectors(PCI_IRQ_AFFINITY) has to be used; that means
> smp_affinity_enable has to be set to 1, but that seems to be the default
> setting.
>
> Please see kernel/irq/affinity.c, especially
On 2018/2/5 23:20, Ming Lei wrote:
This patch uses .force_blk_mq to drive HPSA via SCSI_MQ, and meanwhile maps
each reply queue to a blk_mq hw queue, so that .queuecommand can always
choose the hw queue as the reply queue. And if no online CPU is
mapped to a hw queue, requests can't be submitted to
> This patch introduces the parameter 'g_global_tags' so that we can
> test this feature with null_blk easily.
>
> No obvious performance drop is seen with global_tags when the whole hw
> queue depth is kept the same:
>
> 1) no 'global_tags', each hw queue depth is 1, and 4 hw queues
> modprobe null_blk
On 02/05/2018 11:15 AM, Lee Duncan wrote:
> On 01/31/2018 10:57 PM, Nilesh Javali wrote:
>> Adjust the NULL byte added by snprintf.
>>
>> Signed-off-by: Nilesh Javali
>> ---
>> drivers/scsi/qedi/qedi_main.c | 12 ++--
>> 1 file changed, 6 insertions(+), 6
On 01/31/2018 10:57 PM, Nilesh Javali wrote:
> Adjust the NULL byte added by snprintf.
>
> Signed-off-by: Nilesh Javali
> ---
> drivers/scsi/qedi/qedi_main.c | 12 ++--
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git
The libiscsi code was using both spin_lock()/spin_unlock()
and spin_lock_bh()/spin_unlock_bh() on its session lock.
In addition, lock validation found that libiscsi.c was
taking a HARDIRQ-unsafe lock while holding a
HARDIRQ-safe lock:
> [ 2528.738454]
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Monday, February 05, 2018 9:21 AM
> To: Jens Axboe ; linux-bl...@vger.kernel.org; Christoph
> Hellwig ; Mike Snitzer
> Cc: linux-scsi@vger.kernel.org;
> -Original Message-
> From: Laurence Oberman [mailto:lober...@redhat.com]
> Sent: Monday, February 05, 2018 9:58 AM
> To: Ming Lei ; Jens Axboe ; linux-
> bl...@vger.kernel.org; Christoph Hellwig ; Mike Snitzer
>
> -Original Message-
> This is a critical issue on HPSA because Linus's tree already has the
> original commit that causes the system to fail to boot.
>
> All my testing was on DL380 G7 servers with:
>
> Hewlett-Packard Company Smart Array G6 controllers
> Vendor: HP Model: P410i
On Mon, 2018-02-05 at 23:20 +0800, Ming Lei wrote:
> This patch uses .force_blk_mq to drive HPSA via SCSI_MQ, and meanwhile
> maps each reply queue to a blk_mq hw queue, so that .queuecommand can
> always choose the hw queue as the reply queue. And if no online CPU is
> mapped to a hw queue,
On 05/02/2018 16:20, Ming Lei wrote:
> Now that 84676c1f21e8ff5 ("genirq/affinity: assign vectors to all possible CPUs")
> has been merged into v4.16-rc, it is easy for some irq vectors to be allocated
> only offline CPUs; this can't be avoided even though the allocation
> has been improved.
>
> For example, on a
This patch uses .force_blk_mq to drive HPSA via SCSI_MQ, and meanwhile maps
each reply queue to a blk_mq hw queue, so that .queuecommand can always
choose the hw queue as the reply queue. And if no online CPU is mapped
to a hw queue, requests can't be submitted to this hw queue
at all, and finally the
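The mapping described above can be sketched in userspace. This is a hedged Python model of the idea only, not the kernel implementation: each possible CPU is assigned to a hw queue (the real spreading in blk_mq_map_queues() is more involved than the modulo used here), and the submitting CPU's hw queue doubles as its reply queue.

```python
# Toy model: spread CPUs over hw queues and pick the reply queue from
# the submitting CPU's hw queue, so completions follow the same map.

def map_cpus_to_hw_queues(nr_cpus, nr_hw_queues):
    """Round-robin each CPU onto a hw queue (simplified spreading)."""
    return {cpu: cpu % nr_hw_queues for cpu in range(nr_cpus)}

def choose_reply_queue(cpu, cpu_to_hwq):
    """With one reply queue per hw queue, .queuecommand can always use
    the hw queue of the CPU it runs on as the reply queue."""
    return cpu_to_hwq[cpu]

cpu_map = map_cpus_to_hw_queues(nr_cpus=8, nr_hw_queues=4)
reply_q = choose_reply_queue(5, cpu_map)
```

The point of the model is that the reply queue is derived from the hw-queue map, so a completion never lands on a reply queue that has no mapped CPU.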
This lets us decide the default reply queue from the map created
while adding the host.
Cc: Hannes Reinecke
Cc: Arun Easi
Cc: Omar Sandoval
Cc: "Martin K. Petersen"
Cc: James Bottomley
Now that 84676c1f21e8ff5 ("genirq/affinity: assign vectors to all possible CPUs")
has been merged into v4.16-rc, it is easy for some irq vectors to be allocated
only offline CPUs; this can't be avoided even though the allocation
has been improved.
For example, on an 8-core VM with CPUs 4-7 not-present/offline and 4 queues
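The effect in that example can be modeled with a small Python sketch. This is an illustration of the chunked spreading done in kernel/irq/affinity.c, not the real code: with 8 possible CPUs, of which 4-7 are offline, and 4 vectors, some vectors end up with only offline CPUs.

```python
def spread_vectors(possible_cpus, nr_vectors):
    """Spread all *possible* CPUs (online or not) evenly across vectors,
    in contiguous chunks, roughly as irq_create_affinity_masks() does
    after commit 84676c1f21e8ff5. Offline CPUs are included, which is
    exactly the problem described above."""
    per_vec = len(possible_cpus) // nr_vectors
    return [possible_cpus[i * per_vec:(i + 1) * per_vec]
            for i in range(nr_vectors)]

possible = list(range(8))        # 8 possible CPUs
online = {0, 1, 2, 3}            # CPUs 4-7 are not-present/offline
masks = spread_vectors(possible, nr_vectors=4)
# Vectors whose mask contains no online CPU can never see a submission
# from an online CPU, yet they were still allocated.
dead_vectors = [v for v, m in enumerate(masks) if not (set(m) & online)]
```

Here `dead_vectors` comes out non-empty: the last two vectors are mapped only to offline CPUs, which is why the reply-queue selection has to consult the hw-queue map rather than assume every vector is usable.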
From the scsi driver's view, it is a bit troublesome to support both blk-mq
and non-blk-mq at the same time, especially when drivers need to support
multiple hw queues.
This patch introduces 'force_blk_mq' in scsi_host_template so that drivers
can provide blk-mq-only support, and driver code can avoid the
This patch introduces the parameter 'g_global_tags' so that we can
test this feature with null_blk easily.
No obvious performance drop is seen with global_tags when the whole hw
queue depth is kept the same:
1) no 'global_tags', each hw queue depth is 1, and 4 hw queues
modprobe null_blk queue_mode=2
From: Hannes Reinecke
Add a host template flag 'host_tagset' to enable the use of a global
tagset for block-mq.
Cc: Hannes Reinecke
Cc: Arun Easi
Cc: Omar Sandoval
Cc: "Martin K. Petersen"
Cc:
Quite a few HBAs (such as HPSA, megaraid, mpt3sas, ...) support multiple
reply queues, but the tag space is often HBA-wide.
These HBAs have switched to pci_alloc_irq_vectors(PCI_IRQ_AFFINITY)
for automatic affinity assignment.
Now 84676c1f21e8ff5(genirq/affinity: assign vectors to all possible CPUs)
This patch changes tags->breserved_tags, tags->bitmap_tags and
tags->active_queues to pointers, preparing for support of global tags.
No functional change.
Tested-by: Laurence Oberman
Reviewed-by: Hannes Reinecke
Cc: Mike Snitzer
Cc:
Hi All,
This patchset supports global tags which was started by Hannes originally:
https://marc.info/?l=linux-block=149132580511346=2
Also introduce 'force_blk_mq' and 'host_tagset' in 'struct scsi_host_template',
so that drivers can avoid supporting two IO paths (legacy and blk-mq),
On Sat, 2018-02-03 at 14:43 -0500, Laurence Oberman wrote:
> Hello Doug
>
> I had emailed you earlier about this issue forgetting to copy others.
>
> All test devices in the scsi_debug driver share the same RAM space, so
> we cannot really have individual devices for testing stuff like md-
>
If every_nth > 0, the injection flags must be reset for commands
that aren't supposed to fail (i.e. that aren't "nth"). Otherwise,
commands will continue to fail, like in the every_nth < 0 case.
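The fix's logic can be sketched as follows. This is a hedged Python model of the every_nth accounting with hypothetical names, not the scsi_debug code itself:

```python
def should_inject(cmd_count, every_nth):
    """Return (inject, reset) for a command.

    With every_nth > 0, only every nth command gets the error injected;
    for all other commands the injection flags must be reset, otherwise
    stale flags keep failing commands like the every_nth < 0 case."""
    if every_nth <= 0:
        return (False, False)          # periodic injection not active
    if cmd_count % every_nth == 0:
        return (True, False)           # this is the "nth" command: fail it
    return (False, True)               # not "nth": clear stale flags

# With every_nth = 3, commands 3 and 6 fail; the rest clear the flags.
decisions = [should_inject(n, 3) for n in range(1, 7)]
```

The essential point is the `(False, True)` branch: without the reset, the flag set on command 3 would still be in effect for commands 4 and 5.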
Signed-off-by: Martin Wilck
---
drivers/scsi/scsi_debug.c | 7 ++-
1 file
> >>> -Original Message-
> >>> From: linux-scsi-ow...@vger.kernel.org [mailto:linux-scsi-
> >>> ow...@vger.kernel.org] On Behalf Of Asutosh Das
> >>> Sent: Tuesday, January 30, 2018 6:54 AM
> >>> To: subha...@codeaurora.org; c...@codeaurora.org;
> >>> vivek.gau...@codeaurora.org;
From: Gilad Broner
Different platforms may have different numbers of lanes for the UFS link.
Add a device-tree parameter specifying how many lanes should be
configured for the UFS link. Also, don't print an error message for clocks
that are optional, as this leads to unnecessary
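A device-tree fragment for such a lane-count parameter might look like this. The node label and property name are illustrative assumptions; the authoritative names come from the platform's UFS binding documentation, not from this patch preview:

```dts
/* Hypothetical fragment: configure the UFS link width from DT. */
&ufshc {
	/* Illustrative property name; check the UFS host controller
	 * binding for the property the driver actually parses. */
	lanes-per-direction = <2>;
};
```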
On Mon, Feb 05, 2018 at 07:54:29AM +0100, Hannes Reinecke wrote:
> On 02/03/2018 05:21 AM, Ming Lei wrote:
> > Quite a few HBAs(such as HPSA, megaraid, mpt3sas, ..) support multiple
> > reply queues, but tags is often HBA wide.
> >
> > These HBAs have switched to use
On Mon, Feb 05, 2018 at 07:58:29AM +0100, Hannes Reinecke wrote:
> On 02/03/2018 05:21 AM, Ming Lei wrote:
> > Hi All,
> >
> > This patchset supports global tags which was started by Hannes originally:
> >
> > https://marc.info/?l=linux-block=149132580511346=2
> >
> > Also introduce
Hi Kashyap,
On Mon, Feb 05, 2018 at 12:35:13PM +0530, Kashyap Desai wrote:
> > -Original Message-
> > From: Hannes Reinecke [mailto:h...@suse.de]
> > Sent: Monday, February 5, 2018 12:28 PM
> > To: Ming Lei; Jens Axboe; linux-bl...@vger.kernel.org; Christoph Hellwig;
> > Mike Snitzer
> >
On 02/02/2018 06:39 PM, Steffen Maier wrote:
>
> On 02/02/2018 05:00 PM, Hannes Reinecke wrote:
>> On 01/26/2018 05:54 PM, Steffen Maier wrote:
>>> On 12/18/2017 09:31 AM, Hannes Reinecke wrote:
On 12/15/2017 07:08 PM, Steffen Maier wrote:
> On 12/14/2017 11:11 AM, Hannes Reinecke wrote: