> > > - } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
> > > + } else if (plug && q->mq_ops->commit_rqs) {
> > > /*
> > > * Use plugging if we have a ->commit_rqs() hook as well, as
> > > * we know the driver uses bd->last in a smart fashion
>
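The hunk quoted above drops the single-hw-queue special case, so plugging is used only when the driver provides a ->commit_rqs() hook. A minimal model of the old and new predicates (hypothetical helper names; `has_commit_rqs` stands in for `q->mq_ops->commit_rqs`, not the kernel code itself):

```c
#include <assert.h>
#include <stdbool.h>

/* Old behavior: plug when plugging is active and the device has a single
 * hw queue OR a ->commit_rqs() hook. */
static bool use_plugging_old(bool plug, int nr_hw_queues, bool has_commit_rqs)
{
        return plug && (nr_hw_queues == 1 || has_commit_rqs);
}

/* New behavior: require the hook, so only drivers that use bd->last in a
 * smart fashion see batched dispatch via the plug. */
static bool use_plugging_new(bool plug, int nr_hw_queues, bool has_commit_rqs)
{
        (void)nr_hw_queues; /* no longer considered */
        return plug && has_commit_rqs;
}
```

The visible behavior change is for single-hw-queue devices without the hook: they no longer take the plugging path.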
> On Fri, Jun 14, 2019 at 01:28:47AM +0530, Kashyap Desai wrote:
> > Are there any changes in the API blk_queue_virt_boundary? I could not
> > find the relevant code which accounts for this. Can you help?
> > Which git repo shall I use for testing? That way I can confirm, I
>
> So before I respin this series, can you help with a way to figure out for
> mpt3sas and megaraid whether a given controller supports NVMe devices at
> all, so that we don't have to set the virt boundary if not?
In MegaRAID we have an adapter-series enum; of those, VENTURA_SERIES and
AERO_SERIES support NVMe.
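Assuming an adapter-series enum like the one named above (VENTURA_SERIES and AERO_SERIES come from the message; the other enumerators and the helper are hypothetical), the driver could gate the virt-boundary setup like this:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the MegaRAID adapter-series enum referenced
 * above; only Ventura- and Aero-generation controllers support NVMe. */
enum adapter_series {
        THUNDERBOLT_SERIES,     /* illustrative older generations */
        INVADER_SERIES,
        VENTURA_SERIES,         /* supports NVMe */
        AERO_SERIES,            /* supports NVMe */
};

/* Sketch: set the NVMe virt boundary only when the controller can
 * actually drive NVMe devices, so other configurations are unaffected. */
static bool needs_nvme_virt_boundary(enum adapter_series series)
{
        return series == VENTURA_SERIES || series == AERO_SERIES;
}
```

In the real driver this predicate would decide whether to apply the virt-boundary setting (blk_queue_virt_boundary() or its midlayer equivalent) during host setup.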
>
> On Thu, Jun 06, 2019 at 09:07:27PM +0530, Kashyap Desai wrote:
> > Hi Christoph, the changes look good. We want to confirm a few sanity
> > checks before ACK. BTW, what benefit will we see from moving the
> > virt_boundary setting to the SCSI midlayer? Is it just modular
> >
>
> >
> > Please drop the patch in my last email, and apply the following patch
> > and see if we can make a difference:
>
> Ming,
>
> I dropped the earlier patch and applied the patch below. Now I am getting
> the expected performance (3.0M IOPS).
> The patch below fixes the performance issue. See the perf report
>
> This ensures all proper DMA layer handling is taken care of by the SCSI
> midlayer. Note that the effect is global, as the IOMMU merging is based
> on a parameter in struct device. We could still turn it off if no PCIe
> devices are present, but I don't know how to find that out.
>
> Als
>
> Please drop the patch in my last email, and apply the following patch and
> see if we can make a difference:
Ming,
I dropped the earlier patch and applied the patch below. Now I am getting
the expected performance (3.0M IOPS).
The patch below fixes the performance issue. See the perf report after applying t
> Meantime please try the following patch and see if a difference can be
> made.
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 49d73d979cb3..d2abec3b0f60 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -589,7 +589,7 @@ static void __blk_mq_complete_request(struct request *rq)
>
> SCSI's reply queue is very similar to blk-mq's hw queue; both are assigned
> by IRQ vector, so map the private reply queue into blk-mq's hw queue via
> .host_tagset.
>
> Then the private reply mapping can be removed.
>
> Another benefit is that the request/irq lost issue may be solved in generic
>
>
> I actually took a look at scsi_host_find_tag() - what I think needs fixing
> here is that it should not return a tag that isn't allocated.
> You're just looking up random stuff, that is a recipe for disaster.
> But even with that, there's no guarantee that the tag isn't going away.
Got your
> >
> > At the time of device removal, it requires reverse traversal: find out
> > if each request associated with the sdev is part of hctx->tags->rqs[]
> > and clear that entry.
> > Not sure about atomic traversal if more than one device removal is
> > happening in parallel. It may be more error prone
> On 12/18/18 10:48 AM, Kashyap Desai wrote:
> >>
> >> On 12/18/18 10:08 AM, Kashyap Desai wrote:
> >>>>>
> >>>>> Other block drivers (e.g. ib_srp, skd) do not need this to work
> >>>>> reliably.
> >>>>>
>
> On 12/18/18 10:08 AM, Kashyap Desai wrote:
> >>>
> >>> Other block drivers (e.g. ib_srp, skd) do not need this to work
> >>> reliably.
> >>> It has been explained to you that the bug that you reported can be
> >>> fixed by modif
> >
> > Other block drivers (e.g. ib_srp, skd) do not need this to work
> > reliably.
> > It has been explained to you that the bug that you reported can be
> > fixed by modifying the mpt3sas driver. So why fix this by modifying
> > the block layer? Additionally, what prevents a race condit
[mpt3sas]
Cc:
Signed-off-by: Kashyap Desai
Signed-off-by: Sreekanth Reddy
---
block/blk-mq.c | 4 +++-
block/blk-mq.h | 1 +
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6a75662..88d1e92 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@
> > > On Thu, Dec 06, 2018 at 11:15:13AM +0530, Kashyap Desai wrote:
> > > > >
> > > > > If the 'tag' passed to scsi_host_find_tag() is valid, I think
> > > > > there shouldn't be such an issue.
>
> > On Thu, Dec 06, 2018 at 11:15:13AM +0530, Kashyap Desai wrote:
> > > >
> > > > If the 'tag' passed to scsi_host_find_tag() is valid, I think
> > > > there shouldn't be such an issue.
> > > >
> > > > If you w
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Friday, December 7, 2018 3:50 PM
> To: Kashyap Desai
> Cc: Bart Van Assche; linux-block; Jens Axboe; linux-scsi; Suganath Prabu
> Subramani; Sreekanth Reddy; Sathya Prakash Veerichetty
> Subjec
> On 12/5/18 10:45 PM, Kashyap Desai wrote:
> >>
> >> If the 'tag' passed to scsi_host_find_tag() is valid, I think there
> >> shouldn't be such an issue.
> >>
> >> If you want to find outstanding IOs, maybe you can try
> >>
>
> If the 'tag' passed to scsi_host_find_tag() is valid, I think there
> shouldn't be such an issue.
>
> If you want to find outstanding IOs, maybe you can try
> blk_mq_queue_tag_busy_iter() or blk_mq_tagset_busy_iter(), because you may
> not know if the passed 'tag' to scsi_host_find_tag() is
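The concern in this thread — that scsi_host_find_tag() hands back a stale request for a tag that was never allocated — can be modeled with a tiny tag map. The busy-iterator style lookup only ever visits allocated tags (this is a userspace sketch, not the blk-mq implementation):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_TAGS 8

struct tagmap {
        bool allocated[MAX_TAGS];
        int  rqs[MAX_TAGS];      /* stands in for hctx->tags->rqs[] */
};

/* Unsafe lookup: returns whatever sits in the slot, allocated or not --
 * this is the stale-entry hazard described above. */
static int *find_tag_unsafe(struct tagmap *m, int tag)
{
        return tag < MAX_TAGS ? &m->rqs[tag] : NULL;
}

/* Safe lookup: refuse tags that are not currently allocated, which is
 * what iterating only busy tags (as blk_mq_tagset_busy_iter() does via
 * the allocated-tag bitmap) effectively guarantees. */
static int *find_tag_safe(struct tagmap *m, int tag)
{
        if (tag >= MAX_TAGS || !m->allocated[tag])
                return NULL;
        return &m->rqs[tag];
}
```

The safe variant never exposes a slot whose tag is free, which is the property the unvalidated scsi_host_find_tag() lookup lacks.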
> -Original Message-
> From: Bart Van Assche [mailto:bvanass...@acm.org]
> Sent: Tuesday, December 4, 2018 10:45 PM
> To: Kashyap Desai; linux-block; Jens Axboe; Ming Lei; linux-scsi
> Cc: Suganath Prabu Subramani; Sreekanth Reddy; Sathya Prakash Veerichetty
> Subject:
> On Tue, Dec 04, 2018 at 03:30:11PM +0530, Kashyap Desai wrote:
> > Problem statement:
> > Whenever we try to get an outstanding request via scsi_host_find_tag,
> > the block layer will return stale entries instead of the actual
> > outstanding request. Kernel panic if a stale entry i
+ Linux-scsi
> > diff --git a/block/blk-mq.h b/block/blk-mq.h
> > index 9497b47..57432be 100644
> > --- a/block/blk-mq.h
> > +++ b/block/blk-mq.h
> > @@ -175,6 +175,7 @@ static inline bool blk_mq_get_dispatch_budget(struct blk_mq_hw_ctx *hctx)
> > static inline void __blk_mq_put_driver_tag(s
uest at ff800010
IP: [] mpt3sas_scsih_scsi_lookup_get+0x6c/0xc0 [mpt3sas]
PGD aa4414067 PUD 0
Oops: [#1] SMP
Call Trace:
[] mpt3sas_get_st_from_smid+0x1f/0x60 [mpt3sas]
[] scsih_shutdown+0x55/0x100 [mpt3sas]
Cc:
Signed-off-by: Kashyap Desai
Signed-off-by: Sreekanth Reddy
---
block/blk-mq
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Tuesday, July 10, 2018 6:34 AM
> To: Jens Axboe
> Cc: linux-block@vger.kernel.org; Ming Lei; Kashyap Desai; Laurence Oberman;
> Omar Sandoval; Christoph Hellwig; Bart Van Assche; Hannes Reinecke
> -Original Message-
> From: Laurence Oberman [mailto:lober...@redhat.com]
> Sent: Monday, July 2, 2018 5:11 PM
> To: Ming Lei; Jens Axboe
> Cc: linux-block@vger.kernel.org; Kashyap Desai; Omar Sandoval; Christoph
> Hellwig; Bart Van Assche; Hannes Reinecke
> Subje
> Right.
>
> Kashyap, could you test the following patch?
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 2f20c9e3efda..7d972b1c3153 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1567,7 +1567,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx
> *hctx, struct blk_mq_ctx *ctx
>
> I guess we need to clean the list after list_splice_tail in the 1/1 patch,
> as follows:
> @@ -1533,19 +1533,19 @@ void blk_mq_insert_requests(struct
> blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
> struct list_head *list)
>
> {
> ...
> +
> + spin_lock(&ctx->lock);
>
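The cleanup being suggested matters because list_splice_tail() moves the entries but leaves the source head still pointing at them; without re-initialization the on-stack list looks non-empty. A minimal userspace model of the list_head semantics (not the kernel code):

```c
#include <assert.h>

/* Minimal model of the kernel's circular doubly-linked list_head. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
        n->prev = h->prev; n->next = h;
        h->prev->next = n; h->prev = n;
}

/* Splice @list onto the tail of @head without touching @list's head --
 * mirroring list_splice_tail(), which is why the caller must reinit. */
static void list_splice_tail(struct list_head *list, struct list_head *head)
{
        if (list_empty(list))
                return;
        list->next->prev = head->prev;  /* first entry's prev */
        head->prev->next = list->next;  /* old tail -> first entry */
        list->prev->next = head;        /* last entry -> head */
        head->prev = list->prev;        /* head's new tail */
}
```

After the splice, the source head still references the moved entries; INIT_LIST_HEAD() (or using list_splice_tail_init() in the first place) restores it to a clean empty state.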
ger.kernel.org; Ming Lei; Kashyap Desai; Laurence Oberman;
> Omar Sandoval; Christoph Hellwig; Bart Van Assche; Hannes Reinecke
> Subject: [PATCH 0/3] blk-mq: improve IO perf in case of none io sched
>
> Hi,
>
> The first 2 patches improve ctx->lock usage, and it is observed that IOP
> > I have created internal code changes based on the RFC below, and using
> > irq poll the CPU lockup issue is resolved.
> > https://www.spinics.net/lists/linux-scsi/msg116668.html
>
> Could we use the 1:1 mapping and not apply the out-of-tree irq poll in the
> following test? So that we can stay on the same page.
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Wednesday, May 2, 2018 3:17 PM
> To: Kashyap Desai
> Cc: linux-s...@vger.kernel.org; linux-block@vger.kernel.org
> Subject: Re: Performance drop due to "blk-mq-sched: improve sequential I/O
Hi Ming,
I was running some performance tests on the latest 4.17-rc and found a
performance drop (approximately 15%) due to the patch set below.
https://marc.info/?l=linux-block&m=150802309522847&w=2
I observed the drop on the latest 4.16.6 stable and 4.17-rc kernels as well.
Taking a bisect approach, I figure o
Hi,
I am running an FIO script on Linux 4.15. This is generic behavior even on
3.x kernels. I wanted to know whether my observation is correct or not.
Here is the FIO command:
numactl -C 0-2 fio single --bs=4k --iodepth=64 --rw=randread
--ioscheduler=none --group_reporting --numjobs=2
If driver is
> -Original Message-
> From: Artem Bityutskiy [mailto:dedeki...@gmail.com]
> Sent: Monday, March 19, 2018 8:12 PM
> To: h...@lst.de; Thomas Gleixner
> Cc: linux-block@vger.kernel.org; snit...@redhat.com; h...@suse.de;
> mr...@linux.ee; linux-s...@vger.kernel.org; don.br...@microsemi.com;
>
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Tuesday, March 13, 2018 3:13 PM
> To: James Bottomley; Jens Axboe; Martin K . Petersen
> Cc: Christoph Hellwig; linux-s...@vger.kernel.org; linux-
> bl...@vger.kernel.org; Meelis Roos; Don Br
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Friday, March 9, 2018 5:33 PM
> To: Kashyap Desai
> Cc: James Bottomley; Jens Axboe; Martin K . Petersen; Christoph Hellwig;
> linux-s...@vger.kernel.org; linux-block@vger.kernel.org; Meelis
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Friday, March 9, 2018 9:02 AM
> To: James Bottomley; Jens Axboe; Martin K . Petersen
> Cc: Christoph Hellwig; linux-s...@vger.kernel.org; linux-
> bl...@vger.kernel.org; Meelis Roos; Don Br
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Thursday, March 8, 2018 4:54 PM
> To: Kashyap Desai
> Cc: Jens Axboe; linux-block@vger.kernel.org; Christoph Hellwig; Mike Snitzer;
> linux-s...@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar Sa
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Thursday, March 8, 2018 6:46 AM
> To: Kashyap Desai
> Cc: Jens Axboe; linux-block@vger.kernel.org; Christoph Hellwig; Mike Snitzer;
> linux-s...@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar Sa
> >
> > Also one observation using the V3 series patch: I am seeing the below
> > affinity mapping whereas I have only 72 logical CPUs. It means we are
> > really not going to use all reply queues.
> > E.g. if I bind fio jobs on CPUs 18-20, I am seeing only one reply
> > queue is used and that may lead to p
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Wednesday, March 7, 2018 10:58 AM
> To: Kashyap Desai
> Cc: Jens Axboe; linux-block@vger.kernel.org; Christoph Hellwig; Mike Snitzer;
> linux-s...@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar
ecke; Arun Easi; Omar Sandoval;
> Martin K . Petersen; James Bottomley; Christoph Hellwig; Kashyap Desai;
> Peter Rivera; Meelis Roos
> Subject: Re: [PATCH V3 1/8] scsi: hpsa: fix selection of reply queue
>
> On Fri, 2018-03-02 at 15:03 +, Don Brace wrote:
> > > -Or
> -Original Message-
> From: Laurence Oberman [mailto:lober...@redhat.com]
> Sent: Wednesday, February 28, 2018 9:52 PM
> To: Ming Lei; Kashyap Desai
> Cc: Jens Axboe; linux-block@vger.kernel.org; Christoph Hellwig; Mike
> Snitzer;
> linux-s...@vger.kernel.org; Hanne
> To: Jens Axboe; linux-block@vger.kernel.org; Christoph Hellwig; Mike Snitzer
> Cc: linux-s...@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar Sandoval;
> Martin K . Petersen; James Bottomley; Christoph Hellwig; Don Brace; Kashyap
> Desai; Peter Rivera; Laurence Oberman; Ming Lei
>
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Tuesday, February 13, 2018 6:11 AM
> To: Kashyap Desai
> Cc: Hannes Reinecke; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun E
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Sunday, February 11, 2018 11:01 AM
> To: Kashyap Desai
> Cc: Hannes Reinecke; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun E
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Friday, February 9, 2018 11:01 AM
> To: Kashyap Desai
> Cc: Hannes Reinecke; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun Easi; Omar
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Thursday, February 8, 2018 10:23 PM
> To: Hannes Reinecke
> Cc: Kashyap Desai; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun E
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Wednesday, February 7, 2018 5:53 PM
> To: Hannes Reinecke
> Cc: Kashyap Desai; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun E
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Tuesday, February 6, 2018 6:02 PM
> To: Kashyap Desai
> Cc: Hannes Reinecke; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun Easi; Omar
> -Original Message-
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Tuesday, February 6, 2018 1:35 PM
> To: Kashyap Desai
> Cc: Hannes Reinecke; Jens Axboe; linux-block@vger.kernel.org; Christoph
> Hellwig; Mike Snitzer; linux-s...@vger.kernel.org; Arun Easi; Omar
> > We still have more than one reply queue ending up completing on one CPU.
>
> pci_alloc_irq_vectors(PCI_IRQ_AFFINITY) has to be used; that means
> smp_affinity_enable has to be set to 1, but that seems to be the default
> setting.
>
> Please see kernel/irq/affinity.c, especially irq_calc_affinity_vectors(
> Petersen; James Bottomley; Christoph Hellwig; Don Brace; Kashyap Desai;
> Peter Rivera; Paolo Bonzini; Laurence Oberman
> Subject: Re: [PATCH 0/5] blk-mq/scsi-mq: support global tags & introduce
> force_blk_mq
>
> On 02/03/2018 05:21 AM, Ming Lei wrote:
> >
d.org; linux-block@vger.kernel.org; paolo.vale...@linaro.org
> Subject: Re: Device or HBA level QD throttling creates randomness in
> sequential workload
>
> On 01/30/2017 09:30 AM, Bart Van Assche wrote:
> > On Mon, 2017-01-30 at 19:22 +0530, Kashyap Desai wrote:
> >> - if
n SCSI.MQ mode.
+*/
+ if (!is_nonrot)
+ udelay(100);
+ }
cmd = megasas_get_cmd_fusion(instance, scmd->request->tag);
` Kashyap
> -Original Message-
> From: Kashyap Desai [mailto:kashyap.de...@broadcom.com]
> Sent: Tuesday, November 01, 2016 11:11 AM
> To: &
> Hi, Kashyap,
>
> I'm CC-ing Kent, seeing how this is his code.
Hi Jeff and Kent, See my reply inline.
>
> Kashyap Desai writes:
>
> > Objective of this patch is -
> >
> > To move code used in bcache module in block layer which is used to
> > fin
> -Original Message-
> From: kbuild test robot [mailto:l...@intel.com]
> Sent: Thursday, January 12, 2017 1:18 AM
> To: Kashyap Desai
> Cc: kbuild-...@01.org; linux-s...@vger.kernel.org;
linux-block@vger.kernel.org;
> ax...@kernel.dk; martin.peter...@oracle.com; j...@
gned-off-by: Kashyap Desai
---
diff --git a/block/blk-core.c b/block/blk-core.c
index 14d7c07..2e93d14 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -693,6 +693,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask,
int node_id)
{
struct request_queue *q;
> -Original Message-
> From: linux-scsi-ow...@vger.kernel.org [mailto:linux-scsi-
> ow...@vger.kernel.org] On Behalf Of Jens Axboe
> Sent: Thursday, November 10, 2016 10:18 PM
> To: Hannes Reinecke; Christoph Hellwig
> Cc: SCSI Mailing List; linux-block@vger.kernel.org
> Subject: Re: Reduce
Read depth: 0 Write depth: 5
IO unplugs:79 Timer unplugs: 0
` Kashyap
> -Original Message-
> From: Jens Axboe [mailto:ax...@kernel.dk]
> Sent: Monday, October 31, 2016 10:54 PM
> To: Kashyap Desai; Omar
> -Original Message-
> From: Omar Sandoval [mailto:osan...@osandov.com]
> Sent: Monday, October 24, 2016 9:11 PM
> To: Kashyap Desai
> Cc: linux-s...@vger.kernel.org; linux-ker...@vger.kernel.org; linux-
> bl...@vger.kernel.org; ax...@kernel.dk; Christoph Hel
>
> On Fri, Oct 21, 2016 at 05:43:35PM +0530, Kashyap Desai wrote:
> > Hi -
> >
> > I found below conversation and it is on the same line as I wanted some
> > input from mailing list.
> >
> > http://marc.info/?l=linux-kernel&m=147569860526197&w=
here any workaround/alternative in the latest upstream kernel, if a user
wants to see a limited penalty for sequential workload on HDD?
` Kashyap
> -Original Message-----
> From: Kashyap Desai [mailto:kashyap.de...@broadcom.com]
> Sent: Thursday, October 20, 2016 3:39 PM
> To: linux-