478e-62a5-ca24-3b12-58f7d0563...@huawei.com/
Could you try the above solution and see if the lockup can be avoided?
John Garry
should have a workable patch.
Thanks,
Ming Lei
On Mon, Aug 19, 2019 at 03:13:58PM +0200, Thomas Gleixner wrote:
> On Mon, 19 Aug 2019, Ming Lei wrote:
>
> > Cc: Jon Derrick
> > Cc: Jens Axboe
> > Reported-by: Jon Derrick
> > Reviewed-by: Jon Derrick
> > Reviewed-by: Keith Busch
>
> T
for each node.
Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-n...@lists.infradead.org,
Cc: Jon Derrick
Cc: Jens Axboe
Reported-by: Jon Derrick
Reviewed-by: Jon Derrick
Reviewed-by: Keith Busch
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 239 +++---
1
e numa node is empty, simply not
spread vectors on this node.
Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-n...@lists.infradead.org,
Cc: Jon Derrick
Cc: Jens Axboe
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 26 ++
1 file changed, 18 insertions(+), 8 del
ase that numvecs is > ncpus
- return -ENOMEM to API's caller
V2:
- add patch3
- start to allocate vectors from node with minimized CPU number,
then every node is guaranteed to be allocated at least one vector.
- avoid cross node spread
Ming Lei (2):
genirq/af
On Fri, Aug 16, 2019 at 11:56 PM Keith Busch wrote:
>
> On Thu, Aug 15, 2019 at 07:28:49PM -0700, Ming Lei wrote:
> > Now __irq_build_affinity_masks() spreads vectors evenly per node, and
> > all vectors may not be spread in case that each numa node has different
> > CPU
id kernel address.
>
> And also op doesn't look like a valid op value, it's 0x23, which has no
> flag bits set, but also doesn't match any of the values in req_opf.
>
> So I suspect data is pointing somewhere bogus. Or possibly it used to
> point at a blk_mq_alloc_data but doesn't anymore.
>
> Why that's happened I have no idea. I can't see any obvious commits in
> mainline or stable that mention anything similar, maybe someone on
> linux-block recognises it?
>
> cheers
Please try:
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/commit/?h=for-5.4/block=556f36e90dbe7dded81f4fac084d2bc8a2458330
Strictly speaking, it is still a workaround, but it works as long as
CPU hotplug isn't involved.
Thanks,
Ming Lei
for each node.
Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-n...@lists.infradead.org,
Cc: Jon Derrick
Cc: Jens Axboe
Reported-by: Jon Derrick
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 223 --
1 file changed, 193 insertions(+), 30 deletions
s from node with minimized CPU number,
then every node is guaranteed to be allocated at least one vector.
- avoid cross node spread
Ming Lei (2):
genirq/affinity: Improve __irq_build_affinity_masks()
genirq/affinity: Spread vectors on node according to nr_cpu ratio
kern
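The scheme sketched in the changelog above (allocate starting from the node with the fewest CPUs, skip empty nodes, spread by nr_cpu ratio) can be modeled in plain user-space C. This is an illustrative sketch only: `spread_vectors()`, the rounding rule, and the 64-node cap are made up here and are not the kernel implementation.

```c
/* User-space sketch of the spreading scheme; spread_vectors() and the
 * 64-node cap are illustrative, not the kernel implementation. */
void spread_vectors(int nvec, const int *node_cpus, int nnodes, int *out)
{
	int order[64], total_cpus = 0, assigned = 0;
	int i, j;

	for (i = 0; i < nnodes; i++) {
		order[i] = i;
		total_cpus += node_cpus[i];
		out[i] = 0;
	}
	/* visit nodes with the fewest CPUs first (insertion sort) */
	for (i = 1; i < nnodes; i++)
		for (j = i; j > 0 &&
		     node_cpus[order[j]] < node_cpus[order[j - 1]]; j--) {
			int t = order[j];

			order[j] = order[j - 1];
			order[j - 1] = t;
		}
	for (i = 0; i < nnodes; i++) {
		int node = order[i], v;

		if (node_cpus[node] == 0)
			continue;	/* empty node: don't spread on it */
		/* this node's nr_cpu-ratio share of what is left,
		 * rounded up so small nodes still get one vector */
		v = ((nvec - assigned) * node_cpus[node] + total_cpus - 1)
			/ total_cpus;
		if (v < 1)
			v = 1;
		if (v > nvec - assigned)
			v = nvec - assigned;
		out[node] = v;
		assigned += v;
		total_cpus -= node_cpus[node];
	}
}
```

Rounding the per-node share up before moving on to larger nodes is what guarantees every non-empty node at least one vector whenever nvec is at least the number of non-empty nodes, which is the invariant the cover letter describes.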
On Tue, Aug 13, 2019 at 07:31:39PM +, Derrick, Jonathan wrote:
> Hi Ming,
>
> On Tue, 2019-08-13 at 16:14 +0800, Ming Lei wrote:
> > The two-stage spread is done on the same irq vectors, and we just need
> > either one stage to cover all vectors, not the two stages working toge
: Christoph Hellwig
Cc: Keith Busch
Cc: linux-n...@lists.infradead.org,
Cc: Jon Derrick
Cc: Jens Axboe
Reported-by: Jon Derrick
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 246 +++---
1 file changed, 206 insertions(+), 40 deletions(-)
diff --git a/kernel
...@lists.infradead.org,
Cc: Jon Derrick
Cc: Jens Axboe
Fixes: 6da4b3ab9a6 ("genirq/affinity: Add support for allocating interrupt
sets")
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affini
d at least one vector.
- avoid cross node spread
Ming Lei (3):
genirq/affinity: Enhance warning check
genirq/affinity: Improve __irq_build_affinity_masks()
genirq/affinity: Spread vectors on node according to nr_cpu ratio
kernel/irq/affinity.c | 243 +++
On Tue, Aug 13, 2019 at 05:26:51PM +0800, Ming Lei wrote:
> On Tue, Aug 13, 2019 at 03:41:12PM +0800, Ming Lei wrote:
> > On Mon, Aug 12, 2019 at 09:27:18AM -0600, Keith Busch wrote:
> > > On Mon, Aug 12, 2019 at 05:57:08PM +0800, Ming Lei wrote:
> > > > Now __irq_
Ming Lei (3):
genirq/affinity: Enhance warning check
arning is triggered on above situation, and
allocation result was supposed to be 4 vectors for each node.
Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-n...@lists.infradead.org,
Cc: Jon Derrick
Cc: Jens Axboe
Reported-by: Jon Derrick
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c
ctor.
- avoid cross node spread
Ming Lei (3):
genirq/affinity: Improve __irq_build_affinity_masks()
genirq/affinity: Spread vectors on node according to nr_cpu ratio
genirq/affinity: Enhance warning check
kernel/irq/affinity.c | 140 --
1 file changed,
On Sat, Aug 10, 2019 at 7:05 AM Ming Lei wrote:
>
> On Fri, Aug 9, 2019 at 10:44 PM Keith Busch wrote:
> >
> > On Fri, Aug 09, 2019 at 06:23:09PM +0800, Ming Lei wrote:
> > > One invariant of __irq_build_affinity_masks() is that all CPUs in the
> > > specified
ffinity set are less loaded than the one which handles the hard
> > interrupt.
>
> I will look to get some figures for CPU loading to back this up.
>
> >
> > This is heavily use case dependent I assume, so making this a general
> > change is perhaps not a good idea, but we could surely make this optional.
>
> That sounds reasonable. So would the idea be to enable this optionally
> at the request threaded irq call?
I'd suggest doing it for managed IRQs by default, because managed IRQ affinity
is NUMA-local and set up gracefully. The idea behind it is sound, since the IRQ
handler should run on the specified CPUs, and the threaded part in particular
often takes more CPU.
Thanks,
Ming Lei
On Fri, Aug 9, 2019 at 10:44 PM Keith Busch wrote:
>
> On Fri, Aug 09, 2019 at 06:23:09PM +0800, Ming Lei wrote:
> > One invariant of __irq_build_affinity_masks() is that all CPUs in the
> > specified masks( cpu_mask AND node_to_cpumask for each node) should be
> > covered
y: Jon Derrick
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 23 +--
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index bc3652a2c61b..76f3d1b27d00 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affi
, the warning report from Jon Derrick can be
fixed.
Please review & comment!
Ming Lei (2):
genirq/affinity: improve __irq_build_affinity_masks()
genirq/affinity: spread vectors on node according to nr_cpu ratio
kernel/irq/affinity.c | 46 +++
1 file cha
simply not
spread vectors on this node.
Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-n...@lists.infradead.org,
Cc: Jon Derrick
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 33 +
1 file changed, 21 insertions(+), 12 deletions(-)
diff --git a/kernel/irq/affini
On Thu, Aug 08, 2019 at 10:32:24AM -0600, Keith Busch wrote:
> On Thu, Aug 08, 2019 at 09:04:28AM +0200, Thomas Gleixner wrote:
> > On Wed, 7 Aug 2019, Jon Derrick wrote:
> > > The current irq spreading algorithm spreads vectors amongst cpus evenly
> > > per node. If a node has more cpus than
Commit-ID: 491beed3b102b6e6c0e7734200661242226e3933
Gitweb: https://git.kernel.org/tip/491beed3b102b6e6c0e7734200661242226e3933
Author: Ming Lei
AuthorDate: Mon, 5 Aug 2019 09:19:06 +0800
Committer: Thomas Gleixner
CommitDate: Thu, 8 Aug 2019 08:47:55 +0200
genirq/affinity: Create
t commit b7e9e1fb7a92 ("scsi: implement .cleanup_rq
> callback") from block/for-next.
>
> Signed-off-by: Steffen Maier
> Fixes: 8930a6c20791 ("scsi: core: add support for request batching")
> Cc: Paolo Bonzini
> Cc: Ming Lei
> ---
> drivers/scsi/scsi_lib
On Wed, Aug 7, 2019 at 1:13 AM James Smart wrote:
>
> On 8/5/2019 6:09 PM, Ming Lei wrote:
> >
> > I am wondering why you use 2 * num_possible_nodes() as the limit instead of
> > num_possible_nodes(), could you explain it a bit?
>
> The number comes from most sys
On Tue, Jul 30, 2019 at 08:43:59AM +0800, Ming Lei wrote:
> On Thu, Jul 25, 2019 at 10:04:58AM +0800, Ming Lei wrote:
> > Hi,
> >
> > When one request is dispatched to LLD via dm-rq, if the result is
> > BLK_STS_*RESOURCE, dm-rq will free the request. However, LLD ma
On Thu, Jul 25, 2019 at 10:04:58AM +0800, Ming Lei wrote:
> Hi,
>
> When one request is dispatched to LLD via dm-rq, if the result is
> BLK_STS_*RESOURCE, dm-rq will free the request. However, LLD may allocate
> private data for this request, so this way will cause memory
On Fri, Jul 26, 2019 at 06:20:46PM +0200, Benjamin Block wrote:
> Hey Ming Lei,
>
> On Sat, Jul 20, 2019 at 11:06:35AM +0800, Ming Lei wrote:
> > Hi,
> >
> > When one request is dispatched to LLD via dm-rq, if the result is
> > BLK_STS_*RESOURCE, dm-rq will f
rl:
> https://github.com/0day-ci/linux/commits/Ming-Lei/blk-mq-add-callback-of-cleanup_rq/20190720-133431
>
>
> in testcase: blktests
> with following parameters:
>
> disk: 1SSD
> test: block-group1
>
>
>
> on test machine: qemu-system-x86_6
we have to
consider related race.
V2:
- run .cleanup_rq() in blk_mq_free_request(), as suggested by Mike
Ming Lei (2):
blk-mq: add callback of .cleanup_rq
scsi: implement .cleanup_rq callback
drivers/md/dm-rq.c | 1 +
drivers/scsi/scsi_lib.c | 13
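The .cleanup_rq idea, giving the driver one chance to release request-private data when a request that was dispatched but never completed gets freed, can be modeled outside the kernel. All names below (`queue_ops`, `free_request`, `alloc_request`, the `cleanup_calls` counter) are illustrative stand-ins for `blk_mq_ops`/`blk_mq_free_request`, not the real API.

```c
#include <stdlib.h>

struct request;

/* Per-queue ops table; .cleanup_rq is the new, optional hook. */
struct queue_ops {
	void (*cleanup_rq)(struct request *rq);
};

struct request {
	const struct queue_ops *ops;
	void *driver_data;	/* driver-private payload set at dispatch time */
};

int cleanup_calls;	/* observation point for this sketch only */

/* Example driver hook, loosely modeled on the scsi patch: drop the
 * private payload that was allocated when the request was prepared. */
static void example_cleanup_rq(struct request *rq)
{
	free(rq->driver_data);
	rq->driver_data = NULL;
	cleanup_calls++;
}

const struct queue_ops example_ops = { .cleanup_rq = example_cleanup_rq };

struct request *alloc_request(const struct queue_ops *ops)
{
	struct request *rq = calloc(1, sizeof(*rq));

	rq->ops = ops;
	return rq;
}

/* Model of the blk_mq_free_request() change: call the optional hook in
 * the free path, exactly once, instead of burdening the fast completion
 * path — so a request dispatched but never completed still gets its
 * private data released before it goes away. */
void free_request(struct request *rq)
{
	if (rq->ops && rq->ops->cleanup_rq)
		rq->ops->cleanup_rq(rq);
	free(rq);
}
```

Running the hook from the free path rather than generic completion code matches the V2 note above: it avoids adding cost to the fast path and sidesteps the related races.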
.cleanup_rq() in generic rq free code(fast
path), cost will be introduced unnecessarily, also we have to
consider related race.
in scsi_mq_ops.
Cc: Ewan D. Milne
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Mike Snitzer
Cc: dm-devel@redhat.com
Cc:
Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via
blk_insert_cloned_request feedback")
Signed-off-by: Ming Lei
---
drivers/
Hellwig
Cc: Mike Snitzer
Cc: dm-devel@redhat.com
Cc:
Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via
blk_insert_cloned_request feedback")
Signed-off-by: Ming Lei
---
drivers/scsi/scsi_lib.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/drivers/scsi/sc
mq IO merging via
blk_insert_cloned_request feedback")
Signed-off-by: Ming Lei
---
drivers/md/dm-rq.c | 1 +
include/linux/blk-mq.h | 13 +
2 files changed, 14 insertions(+)
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index c9e44ac1f9a6..21d5c1784d0c 100644
--- a/dri
.driver_data = NVME_QUIRK_SINGLE_VECTOR |
> - NVME_QUIRK_128_BYTES_SQES },
> + NVME_QUIRK_128_BYTES_SQES |
> + NVME_QUIRK_SHARED_TAGS },
> { 0, }
> };
> MODULE_DEVICE_TABLE(pci, nvme_id_table);
Looks fine to me:
Reviewed-by: Ming Lei
Thanks,
Ming Lei
On Mon, Jul 22, 2019 at 08:40:23AM -0700, Bart Van Assche wrote:
> On 7/19/19 8:06 PM, Ming Lei wrote:
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index e1da8c70a266..52537c145762 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scs
On Mon, Jul 22, 2019 at 09:51:27AM -0700, Bart Van Assche wrote:
> On 7/19/19 8:06 PM, Ming Lei wrote:
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index b038ec680e84..fc38d95c557f 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -502,6 +
...@huawei.com/T/#t
V2:
- run .cleanup_rq() in blk_mq_free_request(), as suggested by Mike
Ming Lei (2):
blk-mq: add callback of .cleanup_rq
scsi: implement .cleanup_rq callback
block/blk-mq.c | 3 +++
drivers/scsi/scsi_lib.c | 28
include/linux/blk-mq.h
9
[5.82] ? scsi_scan_host+0x241/0x241
[5.82] async_run_entry_fn+0xdc/0x23d
[5.82] process_one_work+0x327/0x539
[5.82] worker_thread+0x330/0x492
[5.82] ? rescuer_thread+0x41f/0x41f
[5.82] kthread+0x1c6/0x1d5
[5.82] ? kthread_park+0xd3/0xd3
[5.82] ret_from_fork+0x1f/0x30
[5.82]
==
Thanks,
Ming Lei
: Christoph Hellwig
Cc: Mike Snitzer
Cc: dm-devel@redhat.com
Cc:
Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via
blk_insert_cloned_request feedback")
Signed-off-by: Ming Lei
---
drivers/scsi/scsi_lib.c | 28
1 file changed, 20 insertions(+), 8
g via
blk_insert_cloned_request feedback")
Signed-off-by: Ming Lei
---
block/blk-mq.c | 3 +++
include/linux/blk-mq.h | 7 +++
2 files changed, 10 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b038ec680e84..fc38d95c557f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -50
pping_size() first.
Christoph has already posted a fix, but it doesn't look merged yet:
https://lkml.org/lkml/2019/7/17/62
Thanks,
Ming Lei
On Thu, Jul 18, 2019 at 10:52:01AM -0400, Mike Snitzer wrote:
> On Wed, Jul 17 2019 at 11:25pm -0400,
> Ming Lei wrote:
>
> > dm-rq needs to free request which has been dispatched and not completed
> > by underlying queue. However, the underlying queue may have alloca
/#t
Ming Lei (2):
blk-mq: add callback of .cleanup_rq
scsi: implement .cleanup_rq callback
drivers/md/dm-rq.c | 1 +
drivers/scsi/scsi_lib.c | 15 +++
include/linux/blk-mq.h | 13 +
3 files changed, 29 insertions(+)
Cc: Ewan D. Milne
Cc: Bart Van Assche
Cc
Hellwig
Cc: Mike Snitzer
Cc: dm-devel@redhat.com
Cc:
Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via
blk_insert_cloned_request feedback")
Signed-off-by: Ming Lei
---
drivers/scsi/scsi_lib.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/drivers/scsi/
On Thu, Jul 11, 2019 at 11:36:56PM -0700, Sultan Alsawaf wrote:
> From: Sultan Alsawaf
>
> Typically, drivers allocate sg lists of sizes up to a few MiB in size.
> The current algorithm deals with large sg lists by splitting them into
> several smaller arrays and chaining them together. But if
, u16 hwq)
> +{
> + struct virtio_scsi *vscsi = shost_priv(shost);
> +
> + virtscsi_kick_vq(&vscsi->req_vqs[hwq]);
> +}
> +
> /*
> * The host guarantees to respond to each command, although I/O
> * latencies might be higher than on bare metal. Reset the timer
> @@ -681,6 +705,7 @@ static struct scsi_host_template virtscsi_host_template =
> {
> .this_id = -1,
> .cmd_size = sizeof(struct virtio_scsi_cmd),
> .queuecommand = virtscsi_queuecommand,
> + .commit_rqs = virtscsi_commit_rqs,
> .change_queue_depth = virtscsi_change_queue_depth,
> .eh_abort_handler = virtscsi_abort,
> .eh_device_reset_handler = virtscsi_device_reset,
> --
> 2.21.0
>
Reviewed-by: Ming Lei
Thanks,
Ming Lei
On Wed, May 29, 2019 at 11:28 AM Ming Lei wrote:
>
> Hi,
>
> It looks like ebpf tracing doesn't work during CPU hotplug; see the following trace:
>
> 1) trace two functions called during CPU unplug via bcc/trace
>
> /usr/share/bcc/tools/trace -T 'takedown_cpu "%d", arg1
On Mon, Jun 3, 2019 at 4:16 PM Paolo Bonzini wrote:
>
> On 31/05/19 05:27, Ming Lei wrote:
> > It should be fine to implement scsi_commit_rqs() as:
> >
> > if (shost->hostt->commit_rqs)
> >shost->hostt->commit_rqs(shost, hctx->queue_num);
zfcp_fc_exec_ct_job()
>zfcp_fsf_send_ct()
> zfcp_fsf_setup_ct_els() //see above
>
> If I was not mistaken above, the following could be more descriptive parts
> of a patch/commit description, with hopefully less confusion for anyone
> having to look at zfcp git history a fe
On Tue, Jun 25, 2019 at 12:01:24PM +1000, Finn Thain wrote:
> > diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
> > index dccdb41bed8c..c7129f5234f0 100644
> > --- a/drivers/s390/scsi/zfcp_dbf.c
> > +++ b/drivers/s390/scsi/zfcp_dbf.c
> > @@ -552,7 +552,7 @@ static u16
On Mon, Jun 24, 2019 at 05:13:24PM +0200, Steffen Maier wrote:
> Hi Ming,
>
> On 6/18/19 3:37 AM, Ming Lei wrote:
> > Use the scatterlist iterators and remove direct indexing of the
> > scatterlist array.
> >
> > This way allows us to pre-allocate one small scat
t's why I didn't push. Appears to
> be hardware-related, though. Still looking into it.
Today I found that the whole patchset has disappeared from 5.3/scsi-queue;
it seems something is wrong?
Thanks,
Ming Lei
s 256") was merged,
> and thought the THP swap code needn't to be changed. But apparently,
> I was wrong. I should have done this at that time.
>
> Fixes: 6861428921b5 ("block: always define BIO_MAX_PAGES as 256")
> Signed-off-by: "Huang, Ying"
> Cc: Ming Lei
>
On Mon, Jun 24, 2019 at 12:44:41PM +0800, Huang, Ying wrote:
> Ming Lei writes:
>
> > Hi Huang Ying,
> >
> > On Mon, Jun 24, 2019 at 10:23:36AM +0800, Huang, Ying wrote:
> >> From: Huang Ying
> >>
> >> 0-Day test system reported some OOM regr
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/ppa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/ppa.c b/drivers/scsi/ppa.c
index 35213082e933..a406cc825426 100644
--- a/drivers/scsi/ppa.c
+++ b/drivers/scsi/ppa.c
@@ -590,7 +590,7
From: Finn Thain
My understanding is that support for chained scatterlists is to
become mandatory for LLDs.
Use the scatterlist iterators and remove direct indexing of the
scatterlist array.
This way allows us to pre-allocate one small scatterlist, which can be
chained with one runtime
the change to replace SCp.buffers_residual with sg_is_last()
for fixing updating it, and the similar change has been applied on
NCR5380.c
Cc: Finn Thain
Signed-off-by: Ming Lei
---
drivers/scsi/aha152x.c | 46 +-
1 file changed, 23 insertions(+), 23 deletions
-Hartman
Cc: linux-...@vger.kernel.org
Reviewed-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
drivers/usb/image/microtek.c | 20
drivers/usb/image/microtek.h | 2 +-
2 files changed, 9 insertions(+), 13 deletions(-)
diff --git a/drivers/usb
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/pcmcia/nsp_cs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/pcmcia/nsp_cs.c b/drivers/scsi/pcmcia/nsp_cs.c
index a81748e6e8fb..97416e1dcc5b 100644
--- a/drivers/scsi/pcmcia/nsp_cs.c
+++ b
: Greg Kroah-Hartman
Acked-by: Greg Kroah-Hartman
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/staging/unisys/visorhba/visorhba_main.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/unisys/visorhba
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/wd33c93.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/wd33c93.c b/drivers/scsi/wd33c93.c
index 74be04f2357c..ae5935c0a149 100644
--- a/drivers/scsi/wd33c93.c
+++ b/drivers/scsi/wd33c93.c
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/pmcraid.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c
index e338d7a4f571..286cac59cb5f 100644
--- a/drivers/scsi/pmcraid.c
+++ b/drivers/scsi
...@driverdev.osuosl.org
Cc: Greg Kroah-Hartman
Signed-off-by: Ming Lei
---
drivers/staging/rts5208/rtsx_transport.c | 32 +++-
drivers/staging/rts5208/rtsx_transport.h | 2 +-
drivers/staging/rts5208/spi.c| 14 ++-
3 files changed, 24 insertions(+), 24 deletions
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/imm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/imm.c b/drivers/scsi/imm.c
index 64ae418d29f3..56d29f157749 100644
--- a/drivers/scsi/imm.c
+++ b/drivers/scsi/imm.c
@@ -686,7 +686,7
Block
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: linux-s...@vger.kernel.org
Acked-by: Benjamin Block
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/s390/scsi/zfcp_fc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
Signed-off-by: Ming Lei
---
drivers/scsi/ipr.c | 29 -
1 file changed, 16 insertions(+), 13 deletions(-)
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index 6d053e220153..bf17540affbc 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -3915,22 +3915,23
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/advansys.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c
index d37584403c33..b87de8d3d844 100644
--- a/drivers/scsi/advansys.c
Reviewed-by: Bart Van Assche
Reviewed-by: Ewan D. Milne
Signed-off-by: Ming Lei
---
drivers/scsi/mvumi.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
index a5410615edac..0022cd31500a 100644
--- a/drivers/scsi/mvumi.c
+++ b
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/lpfc/lpfc_nvmet.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index f3d9a5545164..3f803982bd1e 100644
scsi command(9 drivers found)
- run 'git grep -E "SCp.buffer"' to find direct sgl uses
from SCp.buffer(6 drivers are found)
Finn Thain (2):
scsi: aha152x: use sg helper to operate scatterlist
NCR5380: Support chained sg lists
Ming Lei (14):
scsi: vmw_pscsi: u
Reviewed-by: Christoph Hellwig
Reviewed-by: Bart Van Assche
Signed-off-by: Ming Lei
---
drivers/scsi/vmw_pvscsi.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c
index ecee4b3ff073..d71abd416eb4 100644
--- a/drivers/scsi
= 0x,
.max_segment_size should be aligned, either by setting it here correctly or
by forcing it to be aligned in scsi-core.
Thanks,
Ming Lei
_segment_size(q));
The patch looks fine. I'd also suggest making sure that max_segment_size
is block-size aligned; an unaligned max segment size has caused trouble
on mmc.
Thanks,
Ming Lei
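The alignment point above can be sketched as a small helper. The name `align_max_segment` and its signature are made up for illustration; this is not an existing kernel function.

```c
/* Hypothetical helper: round a max segment size down to a multiple of
 * the logical block size, so no single segment can split a block.
 * Name and signature are made up for illustration. */
unsigned int align_max_segment(unsigned int max_seg, unsigned int block_size)
{
	unsigned int aligned = max_seg - (max_seg % block_size);

	/* never report less than one block */
	return aligned ? aligned : block_size;
}
```

For example, a hardware limit of 0xffff with 512-byte blocks would round down to 0xfe00, avoiding the unaligned-segment trouble mentioned above.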
On Tue, Jun 18, 2019 at 09:35:48AM +1000, Finn Thain wrote:
> On Mon, 17 Jun 2019, Finn Thain wrote:
>
> > On Mon, 17 Jun 2019, Ming Lei wrote:
> >
> > > Use the scatterlist iterators and remove direct indexing of the
> > > scatterlist array.
> > >
On Mon, Jun 17, 2019 at 10:27:06AM +0200, Christoph Hellwig wrote:
> On Mon, Jun 17, 2019 at 11:03:42AM +0800, Ming Lei wrote:
> > Use the scatterlist iterators and remove direct indexing of the
> > scatterlist array.
> >
> > This way allows us to pre-allocate one s
On Mon, Jun 17, 2019 at 10:24:23AM +0200, Christoph Hellwig wrote:
> > - for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) {
> > - struct page *page = sg_page(&scatterlist[i]);
> > + for (i = 0; i < (len / bsize_elem); i++, sg = sg_next(sg), buffer +=
> > bsize_elem) {
>
> Please
Use the scatterlist iterators and remove direct indexing of the
scatterlist array.
This way allows us to pre-allocate one small scatterlist, which can be
chained with one runtime allocated scatterlist if the pre-allocated one
isn't enough for the whole request.
Signed-off-by: Ming Lei
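A toy model shows why iterator-style stepping is needed once scatterlists may be chained: the last slot of an array can point at another array, so plain `sg[i]` indexing would misread the link slot and run off the end, while an sg_next()-style step follows the link. The layout below is illustrative only; the kernel encodes the chain link in low pointer bits rather than a separate field.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a chainable scatterlist: an array slot is either a data
 * entry or a link ("chain") to the next array. Not the kernel layout. */
struct sg {
	int length;		/* data entry: bytes in this segment */
	struct sg *chain;	/* non-NULL: slot is a link, not data */
	bool is_last;		/* true on the final data entry */
};

/* sg_next()-style step: advance within the array, but hop through a
 * chain slot instead of treating it as data. */
struct sg *sg_step(struct sg *s)
{
	if (s->is_last)
		return NULL;
	s++;
	if (s->chain)
		s = s->chain;
	return s;
}

int total_length(struct sg *s)
{
	int sum = 0;

	for (; s; s = sg_step(s))
		sum += s->length;
	return sum;
}
```

This is also why the pre-allocated small scatterlist works: when it runs out, one runtime-allocated array is chained on, and the same iteration loop covers both without the caller ever indexing across the boundary.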
SCp.buffers_residual with sg_is_last()
for fixing updating it, and the similar change has been applied on
NCR5380.c
Cc: Finn Thain
Signed-off-by: Ming Lei
---
drivers/scsi/aha152x.c | 42 --
1 file changed, 20 insertions(+), 22 deletions(-)
diff --git
...@driverdev.osuosl.org
Cc: Greg Kroah-Hartman
Signed-off-by: Ming Lei
---
drivers/staging/rts5208/rtsx_transport.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/rts5208/rtsx_transport.c
b/drivers/staging/rts5208/rtsx_transport.c
index 8277d7895608..407c9079b052