; are not as easy to brick as they were reported to be three or four
> years ago.
I remember that DC (data-center) SSDs often don't support background GC.
Thanks,
Ming Lei
Hi Masami,
On Sat, Jun 20, 2020 at 10:37:47AM +0900, Masami Hiramatsu wrote:
> Hi Ming,
>
> On Sat, 20 Jun 2020 07:28:20 +0800
> Ming Lei wrote:
>
> > >
> > > Ah, after all it is as expected. With your kconfig, the kernel is
> > > very aggressively opt
On Sat, Jun 20, 2020 at 07:14:51AM +0800, Ming Lei wrote:
> On Fri, Jun 19, 2020 at 07:04:05PM -0400, Mike Snitzer wrote:
> > On Fri, Jun 19 2020 at 6:52pm -0400,
> > Ming Lei wrote:
> >
> > > On Sat, Jun 20, 2020 at 06:37:44AM +0800, Ming Lei wrote:
> > &g
Hi Masami,
On Sat, Jun 20, 2020 at 12:35:09AM +0900, Masami Hiramatsu wrote:
> Hi Ming,
>
> On Fri, 19 Jun 2020 21:32:40 +0800
> Ming Lei wrote:
>
> > On Fri, Jun 19, 2020 at 08:19:54AM -0400, Steven Rostedt wrote:
> > > On Fri, 19 Jun 2020 15:28:
On Fri, Jun 19, 2020 at 07:04:05PM -0400, Mike Snitzer wrote:
> On Fri, Jun 19 2020 at 6:52pm -0400,
> Ming Lei wrote:
>
> > On Sat, Jun 20, 2020 at 06:37:44AM +0800, Ming Lei wrote:
> > > On Fri, Jun 19, 2020 at 01:40:41PM -0400, Mike Snitzer wrote:
> > > >
On Sat, Jun 20, 2020 at 06:37:44AM +0800, Ming Lei wrote:
> On Fri, Jun 19, 2020 at 01:40:41PM -0400, Mike Snitzer wrote:
> > On Fri, Jun 19 2020 at 12:06pm -0400,
> > Mike Snitzer wrote:
> >
> > > On Fri, Jun 19 2020 at 6:11am -0400,
> > >
On Fri, Jun 19, 2020 at 01:40:41PM -0400, Mike Snitzer wrote:
> On Fri, Jun 19 2020 at 12:06pm -0400,
> Mike Snitzer wrote:
>
> > On Fri, Jun 19 2020 at 6:11am -0400,
> > Ming Lei wrote:
> >
> > > Hi Mike,
> > >
> > > On Fri, Jun 19, 20
On Fri, Jun 19, 2020 at 12:06:57PM -0400, Mike Snitzer wrote:
> On Fri, Jun 19 2020 at 6:11am -0400,
> Ming Lei wrote:
>
> > Hi Mike,
> >
> > On Fri, Jun 19, 2020 at 05:42:50AM -0400, Mike Snitzer wrote:
> > > Hi Ming,
> > >
> >
On Fri, Jun 19, 2020 at 08:19:54AM -0400, Steven Rostedt wrote:
> On Fri, 19 Jun 2020 15:28:59 +0800
> Ming Lei wrote:
>
> > >
> > > OK, then let's make events (for sure)
> > >
> > > root@devnote2:/sys/kernel/debug/tracing# echo p __blkdev_put >
Hi Mike,
On Fri, Jun 19, 2020 at 05:42:50AM -0400, Mike Snitzer wrote:
> Hi Ming,
>
> Thanks for the patch! But I'm having a hard time understanding what
> you've written in the patch header,
>
> On Fri, Jun 19 2020 at 4:42am -0400,
> Ming Lei wrote:
>
> > dm
() and dm_stop_queue() may be called while synchronize_rcu() from another
blk_mq_quiesce_queue() is in progress.
Cc: linux-bl...@vger.kernel.org
Signed-off-by: Ming Lei
---
drivers/md/dm-rq.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index f60c02512121
Hi Masami,
On Fri, Jun 19, 2020 at 02:12:39PM +0900, Masami Hiramatsu wrote:
> Hi Ming,
>
> On Fri, 19 Jun 2020 07:19:01 +0800
> Ming Lei wrote:
>
> > > I'm using 5.4 on ubuntu and can not reproduce it with kprobe_event.
> > >
> > > root@devnote2
On Thu, Jun 18, 2020 at 10:56:02PM +0900, Masami Hiramatsu wrote:
> Hi Ming,
>
> On Thu, 18 Jun 2020 20:54:38 +0800
> Ming Lei wrote:
>
> > On Wed, Jun 17, 2020 at 06:30:39PM +0800, Ming Lei wrote:
> > > Hello Guys,
> > >
> > > I found probe
On Wed, Jun 17, 2020 at 06:30:39PM +0800, Ming Lei wrote:
> Hello Guys,
>
> I found probe on __blkdev_put is missed, which can be observed
> via bcc/perf reliably:
>
> 1) start trace
> - perf probe __blkdev_put
> - perf trace -a -e probe:__blkdev_put
>
> or
>
special thing about __blkdev_put() is that the function will call into
itself. However, no such issue on __blkdev_get() which calls itself too.
Thanks,
Ming Lei
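The special behavior above, a traced function that calls into itself, can be modeled outside the kernel. In this toy Python sketch (not kernel code), `sys.setprofile` stands in for the kprobe and `blkdev_put_demo` stands in for `__blkdev_put()`; the point is that a probe on a recursive function is expected to fire once per entry, including the recursive ones:

```python
# Toy model: count probe hits on a function that calls itself.
# sys.setprofile plays the role of the kprobe here; all names are
# illustrative stand-ins, not kernel APIs.
import sys

hits = 0

def blkdev_put_demo(depth):          # stands in for __blkdev_put()
    if depth:
        blkdev_put_demo(depth - 1)   # the function calls into itself

def probe(frame, event, arg):
    global hits
    if event == "call" and frame.f_code.co_name == "blkdev_put_demo":
        hits += 1

sys.setprofile(probe)
blkdev_put_demo(2)
sys.setprofile(None)
print(hits)  # one initial entry plus two recursive entries
```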
truct blk_mq_tags *tags, busy_tag_iter_fn *fn,
> void *priv)
> {
> - return __blk_mq_all_tag_iter(tags, fn, priv, BT_TAG_ITER_STATIC_RQS);
> + __blk_mq_all_tag_iter(tags, fn, priv, BT_TAG_ITER_STATIC_RQS);
> }
Reviewed-by: Ming Lei
--
Ming
On Mon, Jun 08, 2020 at 09:07:24PM -0700, Josh Snyder wrote:
> Previously, we performed truncation of I/O issue/completion times during
> calculation of io_ticks, counting only I/Os which cross a jiffy
> boundary. The effect is a sampling of I/Os: at every boundary between
> jiffies we ask "is
On Mon, Jun 08, 2020 at 09:07:23PM -0700, Josh Snyder wrote:
> Previously, io_ticks could be under-counted. Consider these I/Os along
> the time axis (in jiffies):
>
> t 012345678
> io1||
> io2|---|
In the current scheme, when io2 is done, io_ticks should be 5,
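The sampling scheme under discussion can be modeled in a few lines of Python (this is an illustration, not the kernel code, and the I/O intervals below are made up): at each jiffy boundary, ask "is any I/O in flight?" and count one tick if so.

```python
# Illustrative model of jiffy-boundary sampling for io_ticks.
def io_ticks_sampled(ios, horizon):
    """ios: list of (issue, complete) jiffy pairs, complete exclusive."""
    return sum(
        1 for t in range(horizon)
        if any(start <= t < end for start, end in ios)
    )

# io1 busy over jiffies [0, 2), io2 over [1, 5): 5 distinct busy jiffies.
print(io_ticks_sampled([(0, 2), (1, 5)], 9))
```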
ke
> advantage of inline encryption hardware if present.
>
> Symbol: BLK_INLINE_ENCRYPTION [=n]
> Type : bool
> Defined at block/Kconfig:189
>Prompt: Enable inline encryption support in block layer
>Depends on: BLOCK [=y]
>Location:
> -> Enable the block laye
KERN_ERR "could not attach integrity payload\n");
> - kfree(buf);
> status = BLK_STS_RESOURCE;
> goto err_end_io;
> }
Looks correct, and it relies on the fact that the 1st 'page' is always added
successfully, so 'buf' is always attached to the bip from then on:
Reviewed-by: Ming Lei
thanks,
Ming
ta = {
> .q = rq->q,
> + .ctx = rq->mq_ctx,
> .hctx = rq->mq_hctx,
> .flags = BLK_MQ_REQ_NOWAIT,
> .cmd_flags = rq->cmd_flags,
Reviewed-by: Ming Lei
--
Ming
Hi Paul,
On Thu, May 28, 2020 at 08:07:28PM -0700, Paul E. McKenney wrote:
> On Fri, May 29, 2020 at 09:53:04AM +0800, Ming Lei wrote:
> > Hi Paul,
> >
> > Thanks for your response!
> >
> > On Thu, May 28, 2020 at 10:21:21AM -0700, Paul E. McKenney wrote:
&g
Hi Paul,
Thanks for your response!
On Thu, May 28, 2020 at 10:21:21AM -0700, Paul E. McKenney wrote:
> On Thu, May 28, 2020 at 06:37:47AM -0700, Bart Van Assche wrote:
> > On 2020-05-27 22:19, Ming Lei wrote:
> > > On Wed, May 27, 2020 at 08:33:48PM -0700, Bart Van Assch
On Thu, May 28, 2020 at 06:37:47AM -0700, Bart Van Assche wrote:
> On 2020-05-27 22:19, Ming Lei wrote:
> > On Wed, May 27, 2020 at 08:33:48PM -0700, Bart Van Assche wrote:
> >> My understanding is that operations that have acquire semantics pair
> >> with operations tha
On Wed, May 27, 2020 at 08:33:48PM -0700, Bart Van Assche wrote:
> On 2020-05-27 18:46, Ming Lei wrote:
> > On Wed, May 27, 2020 at 04:09:19PM -0700, Bart Van Assche wrote:
> >> On 2020-05-27 11:06, Christoph Hellwig wrote:
> >>> --- a/block/blk-mq-tag.c
; */
> static void nvme_reap_pending_cqes(struct nvme_dev *dev)
> {
> int i;
>
> - for (i = dev->ctrl.queue_count - 1; i > 0; i--)
> + for (i = dev->ctrl.queue_count - 1; i > 0; i--) {
> + spin_lock(&dev->queues[i].cq_poll_lock);
> nvme_process_cq(&dev->queues[i]);
> + spin_unlock(&dev->queues[i].cq_poll_lock);
> + }
> }
Looks like a real race, and the fix is fine:
Reviewed-by: Ming Lei
thanks,
Ming Lei
On Thu, May 21, 2020 at 08:39:16PM +0200, Thomas Gleixner wrote:
> Ming,
>
> Ming Lei writes:
> > On Thu, May 21, 2020 at 10:13:59AM +0200, Thomas Gleixner wrote:
> >> Ming Lei writes:
> >> > On Thu, May 21, 2020 at 12:14:18AM +0200, Thomas Gleixner wrote:
Hi Thomas,
On Thu, May 21, 2020 at 10:13:59AM +0200, Thomas Gleixner wrote:
> Ming Lei writes:
> > On Thu, May 21, 2020 at 12:14:18AM +0200, Thomas Gleixner wrote:
> >> When the CPU is finally offlined, i.e. the CPU cleared the online bit in
> >> the online mask is
On Thu, May 21, 2020 at 12:14:18AM +0200, Thomas Gleixner wrote:
> Jens Axboe writes:
>
> > On 5/20/20 1:41 PM, Thomas Gleixner wrote:
> >> Jens Axboe writes:
> >>> On 5/20/20 8:45 AM, Jens Axboe wrote:
> It just uses kthread_create_on_cpu(), nothing home grown. Pretty sure
> they
On Wed, May 20, 2020 at 09:18:23AM +0800, Ming Lei wrote:
> On Tue, May 19, 2020 at 05:30:00PM +0200, Christoph Hellwig wrote:
> > On Tue, May 19, 2020 at 09:54:20AM +0800, Ming Lei wrote:
> > > As Thomas clarified, workqueue hasn't such issue any more, and only other
> >
On Tue, May 19, 2020 at 05:30:00PM +0200, Christoph Hellwig wrote:
> On Tue, May 19, 2020 at 09:54:20AM +0800, Ming Lei wrote:
> > As Thomas clarified, workqueue hasn't such issue any more, and only other
> > per CPU kthreads can run until the CPU clears the online bit.
> >
igned int flush_pending_idx:1;
> unsigned int flush_running_idx:1;
> blk_status_t rq_status;
> --
> 2.17.1
>
Reviewed-by: Ming Lei
--
Ming Lei
: dm-devel@redhat.com
Reviewed-by: Hannes Reinecke
Reviewed-by: Christoph Hellwig
Reviewed-by: Martin K. Petersen
Signed-off-by: Ming Lei
---
block/blk-core.c | 31 ++-
block/blk.h | 2 ++
2 files changed, 20 insertions(+), 13 deletions(-)
diff --git a/block
Signed-off-by: Ming Lei
---
block/blk-core.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/blk-core.c b/block/blk-core.c
index cf5b2163edfe..08ee92baa451 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1669,8 +1669,12 @@ int blk_rq_prep_clone(struct request *rq, struct
On Mon, May 11, 2020 at 11:26:07PM -0700, Sagi Grimberg wrote:
>
> > > devices will benefit from the batching so maybe the flag needs to be
> > > inverted? BLK_MQ_F_DONT_BATCHING_SUBMISSION?
> >
> > Actually I'd rather to not add any flag, and we may use some algorithm
> > (maybe EWMA or other
On Mon, May 11, 2020 at 02:23:14AM -0700, Sagi Grimberg wrote:
>
> > > > Basically, my idea is to dequeue request one by one, and for each
> > > > dequeued request:
> > > >
> > > > - we try to get a budget and driver tag, if both succeed, add the
> > > > request to one per-task list which can be
On Sun, May 10, 2020 at 12:44:53AM -0700, Sagi Grimberg wrote:
>
> > > > > You're mostly correct. This is exactly why an I/O scheduler may be
> > > > > applicable here IMO. Mostly because I/O schedulers tend to optimize
> > > > > for
> > > > > something specific and always present tradeoffs.
On Sat, May 09, 2020 at 04:57:48PM +0800, Baolin Wang wrote:
> On Sat, May 9, 2020 at 7:22 AM Ming Lei wrote:
> >
> > Hi Sagi,
> >
> > On Fri, May 08, 2020 at 03:19:45PM -0700, Sagi Grimberg wrote:
> > > Hey Ming,
> > >
> > > > > Woul
On Fri, May 08, 2020 at 06:15:02PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> various bio based drivers use queue->queuedata despite already having
> set up disk->private_data, which can be used just as easily. This
> series cleans them up to only use a single private data pointer.
>
>
Hi Sagi,
On Fri, May 08, 2020 at 03:19:45PM -0700, Sagi Grimberg wrote:
> Hey Ming,
>
> > > Would it make sense to elevate this flag to a request_queue flag
> > > (QUEUE_FLAG_ALWAYS_COMMIT)?
> >
> > request queue flag usually is writable, however this case just needs
> > one read-only flag, so
On Fri, May 08, 2020 at 06:15:02PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> various bio based drivers use queue->queuedata despite already having
> set up disk->private_data, which can be used just as easily. This
> series cleans them up to only use a single private data pointer.
>
>
On Fri, May 08, 2020 at 06:15:02PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> various bio based drivers use queue->queuedata despite already having
> set up disk->private_data, which can be used just as easily. This
> series cleans them up to only use a single private data pointer.
>
>
On Fri, May 08, 2020 at 02:35:35PM -0700, Sagi Grimberg wrote:
>
> > > diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> > > index f389d7c724bd..6a20f8e8eb85 100644
> > > --- a/include/linux/blk-mq.h
> > > +++ b/include/linux/blk-mq.h
> > > @@ -391,6 +391,7 @@ struct blk_mq_ops {
> >
vaguely related cleanups.
>
> Changes since v2:
> - switch vboxsf to a shorter bdi name
>
> Changes since v1:
> - use a static dev_name buffer inside struct backing_dev_info
>
Looks fine:
Reviewed-by: Ming Lei
--
Ming
: dm-devel@redhat.com
Reviewed-by: Hannes Reinecke
Reviewed-by: Christoph Hellwig
Reviewed-by: Martin K. Petersen
Tested-by: John Garry
Signed-off-by: Ming Lei
---
block/blk-core.c | 31 ++-
block/blk.h | 2 ++
2 files changed, 20 insertions(+), 13 deletions
Tested-by: John Garry
Signed-off-by: Ming Lei
---
block/blk-core.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/blk-core.c b/block/blk-core.c
index 7f11560bfddb..1fe73051fec3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1630,8 +1630,12 @@ int blk_rq_prep_clone
: dm-devel@redhat.com
Reviewed-by: Hannes Reinecke
Reviewed-by: Christoph Hellwig
Reviewed-by: Martin K. Petersen
Signed-off-by: Ming Lei
---
block/blk-core.c | 31 ++-
block/blk.h | 2 ++
2 files changed, 20 insertions(+), 13 deletions(-)
diff --git a/block
Signed-off-by: Ming Lei
---
block/blk-core.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/blk-core.c b/block/blk-core.c
index 7f11560bfddb..1fe73051fec3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1630,8 +1630,12 @@ int blk_rq_prep_clone(struct request *rq, struct
On Wed, Apr 29, 2020 at 04:03:32PM +0200, Martijn Coenen wrote:
> Ensuring we don't truncate loff_t when casting to sector_t is done in
> multiple places; factor it out.
>
> Signed-off-by: Martijn Coenen
> ---
> drivers/block/loop.c | 25 -
> 1 file changed, 20
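The truncation check being factored out above can be sketched as follows. This is a hypothetical Python model, not the loop driver code; `SECTOR_T_BITS = 32` is an assumption for the demo (sector_t can be narrower than loff_t on some configs):

```python
# Model: converting a byte size (loff_t) to 512-byte sectors must
# not silently truncate when sector_t is narrower.
SECTOR_SHIFT = 9
SECTOR_T_BITS = 32   # demo assumption for a narrow sector_t

def size_to_sectors(size_bytes):
    sectors = size_bytes >> SECTOR_SHIFT
    if sectors >> SECTOR_T_BITS:
        raise OverflowError("sector count would truncate sector_t")
    return sectors

print(size_to_sectors(1 << 20))   # 1 MiB is 2048 sectors
```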
"loop");
>
> misc_deregister(&loop_misc);
> +
> + mutex_unlock(&loop_ctl_mutex);
> }
>
> module_init(loop_init);
> --
> 2.25.1
>
Reviewed-by: Ming Lei
--
Ming
clone nr_integrity_segments and write_hint in blk_rq_prep_clone.
Cc: John Garry
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Thomas Gleixner
Cc: Mike Snitzer
Cc: dm-devel@redhat.com
Signed-off-by: Ming Lei
---
block/blk-core.c | 4
1 file changed, 4 insertions
: dm-devel@redhat.com
Signed-off-by: Ming Lei
---
block/blk-core.c | 33 +++--
block/blk.h | 2 ++
2 files changed, 21 insertions(+), 14 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 91537e526b45..76405551d09e 100644
--- a/block/blk-core.c
On Wed, Mar 11, 2020 at 12:25:38AM +0800, John Garry wrote:
> From: Hannes Reinecke
>
> Enable the use of reserved commands, and drop the hand-crafted
> command allocation.
>
> Signed-off-by: Hannes Reinecke
> ---
> drivers/scsi/hpsa.c | 147 ++--
>
On Wed, Mar 11, 2020 at 07:55:46AM +0100, Hannes Reinecke wrote:
> On 3/11/20 12:08 AM, Ming Lei wrote:
> > On Wed, Mar 11, 2020 at 12:25:27AM +0800, John Garry wrote:
> >> From: Hannes Reinecke
> >>
> >> Add a new field 'nr_reserved_cmds' to the SCSI host
On Wed, Mar 11, 2020 at 12:25:27AM +0800, John Garry wrote:
> From: Hannes Reinecke
>
> Add a new field 'nr_reserved_cmds' to the SCSI host template to
> instruct the block layer to set aside a tag space for reserved
> commands.
>
> Signed-off-by: Hannes Reinecke
> ---
>
On Tue, Feb 18, 2020 at 8:35 PM Halil Pasic wrote:
>
> On Tue, 18 Feb 2020 10:21:18 +0800
> Ming Lei wrote:
>
> > On Thu, Feb 13, 2020 at 8:38 PM Halil Pasic wrote:
> > >
> > > Since nobody else is going to restart our hw_queue for us, the
On Thu, Feb 13, 2020 at 8:38 PM Halil Pasic wrote:
>
> Since nobody else is going to restart our hw_queue for us, the
> blk_mq_start_stopped_hw_queues() in virtblk_done() is not
> necessarily sufficient to ensure that the queue will get started again.
> In case of global resource
> create block devices with 64k block size.
The patch looks fine; other drivers (loop, nbd, virtio_blk, ...) allow
users to pass a customized logical block size too, and the passed size can be > 32k.
Reviewed-by: Ming Lei
Thanks,
Ming
--
dm-devel mailing list
dm-devel@redhat.com
https:/
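A validity check for a user-supplied logical block size, in the spirit of the drivers mentioned above, can be sketched like this (the 512-byte lower bound reflects the usual minimum; the 64k upper bound is only for the demo):

```python
# Illustrative check for a user-supplied logical block size:
# a power of two within a sane range.
def valid_logical_block_size(size):
    return 512 <= size <= 65536 and (size & (size - 1)) == 0

print([s for s in (512, 3000, 4096, 65536) if valid_logical_block_size(s)])
```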
On Mon, Sep 09, 2019 at 08:10:07PM -0700, Sagi Grimberg wrote:
> Hey Ming,
>
> > > > Ok, so the real problem is per-cpu bounded tasks.
> > > >
> > > > I share Thomas opinion about a NAPI like approach.
> > >
> > > We already have that, its irq_poll, but it seems that for this
> > > use-case, we
On Thu, Sep 12, 2019 at 11:24:23AM +0200, Gabriel C wrote:
> Am Do., 12. Sept. 2019 um 02:51 Uhr schrieb Ming Lei :
> >
> > On Thu, Sep 12, 2019 at 12:27 AM Gabriel C wrote:
> > >
> > > Hi Christoph,
> > >
> > > I see this was
0 ]---
>
> ...
>
> The patch from Dongli Zhang was rejected the time without any other fix
> or work on this issue I could find.
>
> Are there any plans to fix that or any code to test?
I guess the following patchset may address it:
https://lore.kernel.org/linux-block/20190812134312.16732-1-ming@redhat.com/
Thanks,
Ming Lei
)
->clone_endio(B)   # B is the original bio of 'C'
->clone_endio(A)   # A is the original bio of 'B'
'A' can be big enough to cause hundreds of such nested clone_endio() calls,
and then the stack is corrupted.
Cc:
Signed-off-by: Ming Lei
---
drivers/md/dm-raid
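The nested completion described above can be modeled as a recursion whose depth equals the length of the clone chain. This is a toy Python model, not DM code; each cloned bio's end-io synchronously invokes its original bio's end-io:

```python
# Toy model: a clone chain unwinds as one deep recursive call stack.
class Bio:
    def __init__(self, parent=None):
        self.parent = parent   # the original bio this one clones

def clone_endio(bio, depth=1):
    if bio.parent is not None:
        return clone_endio(bio.parent, depth + 1)
    return depth               # stack depth reached at the chain top

top = Bio()
b = top
for _ in range(500):           # 500 nested clones
    b = Bio(parent=b)

print(clone_endio(b))          # depth grows with the chain length
```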
On Sat, Sep 07, 2019 at 06:19:20AM +0800, Ming Lei wrote:
> On Fri, Sep 06, 2019 at 05:50:49PM +, Long Li wrote:
> > >Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
> > >
> > >On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote:
>
On Fri, Sep 06, 2019 at 11:30:57AM -0700, Sagi Grimberg wrote:
>
> >
> > Ok, so the real problem is per-cpu bounded tasks.
> >
> > I share Thomas opinion about a NAPI like approach.
>
> We already have that, its irq_poll, but it seems that for this
> use-case, we get lower performance for some
On Fri, Sep 06, 2019 at 04:25:55PM -0600, Keith Busch wrote:
> On Sat, Sep 07, 2019 at 06:19:21AM +0800, Ming Lei wrote:
> > On Fri, Sep 06, 2019 at 05:50:49PM +, Long Li wrote:
> > > >Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
> > >
On Fri, Sep 06, 2019 at 05:50:49PM +, Long Li wrote:
> >Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
> >
> >On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote:
> >> When one IRQ flood happens on one CPU:
> >>
> >&g
Hi Daniel,
On Thu, Sep 05, 2019 at 12:37:13PM +0200, Daniel Lezcano wrote:
>
> Hi Ming,
>
> On 05/09/2019 11:06, Ming Lei wrote:
> > On Wed, Sep 04, 2019 at 07:31:48PM +0200, Daniel Lezcano wrote:
> >> Hi,
> >>
> >> On 04/09/2019 19:07, Bart Van A
7 ++
> block/genhd.c | 9 +++
> block/mq-deadline.c | 1 +
> drivers/block/null_blk_main.c | 2 +
> drivers/md/dm-rq.c| 2 +-
> drivers/scsi/sd_zbc.c | 2 +
> include/linux/blk-mq.h |
On Wed, Sep 04, 2019 at 12:47:13PM -0700, Bart Van Assche wrote:
> On 9/4/19 11:02 AM, Peter Zijlstra wrote:
> > On Wed, Sep 04, 2019 at 10:38:59AM -0700, Bart Van Assche wrote:
> > > I think it is widely known that rdtsc is a relatively slow x86
> > > instruction.
> > > So I expect that using
On Wed, Sep 04, 2019 at 07:31:48PM +0200, Daniel Lezcano wrote:
> Hi,
>
> On 04/09/2019 19:07, Bart Van Assche wrote:
> > On 9/3/19 12:50 AM, Daniel Lezcano wrote:
> >> On 03/09/2019 09:28, Ming Lei wrote:
> >>> On Tue, Sep 03, 2019 at 08
On Thu, Sep 05, 2019 at 01:28:59PM +0900, Damien Le Moal wrote:
> When elevator_init_mq() is called from blk_mq_init_allocated_queue(),
> the only information known about the device is the number of hardware
> queues as the block device scan by the device driver is not completed
> yet. The device
On Wed, Sep 04, 2019 at 05:42:45PM +0900, Damien Le Moal wrote:
> When elevator_init_mq() is called from blk_mq_init_allocated_queue(),
> the only information known about the device is the number of hardware
> queues as the block device scan by the device driver is not completed
> yet. The device
On Tue, Sep 03, 2019 at 09:50:06AM +0200, Daniel Lezcano wrote:
> On 03/09/2019 09:28, Ming Lei wrote:
> > On Tue, Sep 03, 2019 at 08:40:35AM +0200, Daniel Lezcano wrote:
> >> On 03/09/2019 08:31, Ming Lei wrote:
> >>> Hi Daniel,
> >>>
> >>> On
On Tue, Sep 03, 2019 at 10:09:57AM +0200, Thomas Gleixner wrote:
> On Tue, 3 Sep 2019, Ming Lei wrote:
> > Scheduler can do nothing if the CPU is taken completely by handling
> > interrupt & softirq, so seems not a scheduler problem, IMO.
>
> Well, but thinking more a
On Tue, Sep 03, 2019 at 08:40:35AM +0200, Daniel Lezcano wrote:
> On 03/09/2019 08:31, Ming Lei wrote:
> > Hi Daniel,
> >
> > On Tue, Sep 03, 2019 at 07:59:39AM +0200, Daniel Lezcano wrote:
> >>
> >> Hi Ming Lei,
> >>
> >> On 03/09/2019 05:
Hi Daniel,
On Tue, Sep 03, 2019 at 07:59:39AM +0200, Daniel Lezcano wrote:
>
> Hi Ming Lei,
>
> On 03/09/2019 05:30, Ming Lei wrote:
>
> [ ... ]
>
>
> >>> 2) irq/timing doesn't cover softirq
> >>
> >> That's solvable, right?
> >
On Wed, Aug 28, 2019 at 04:07:19PM +0200, Thomas Gleixner wrote:
> On Wed, 28 Aug 2019, Ming Lei wrote:
> > On Wed, Aug 28, 2019 at 01:23:06PM +0200, Thomas Gleixner wrote:
> > > On Wed, 28 Aug 2019, Ming Lei wrote:
> > > > On Wed, Aug 28, 2019 at 01:09:44A
;>
> >>>Cc: Long Li
> >>>Cc: Ingo Molnar ,
> >>>Cc: Peter Zijlstra
> >>>Cc: Keith Busch
> >>>Cc: Jens Axboe
> >>>Cc: Christoph Hellwig
> >>>Cc: Sagi Grimberg
> >>>Cc: John Garry
> &g
On Wed, Aug 28, 2019 at 01:23:06PM +0200, Thomas Gleixner wrote:
> On Wed, 28 Aug 2019, Ming Lei wrote:
> > On Wed, Aug 28, 2019 at 01:09:44AM +0200, Thomas Gleixner wrote:
> > > > > Also how is that supposed to work when sched_clock is jiffies based?
> >
On Wed, Aug 28, 2019 at 01:09:44AM +0200, Thomas Gleixner wrote:
> On Wed, 28 Aug 2019, Ming Lei wrote:
> > On Tue, Aug 27, 2019 at 04:42:02PM +0200, Thomas Gleixner wrote:
> > > On Tue, 27 Aug 2019, Ming Lei wrote:
> > > > +
> > > > + int cpu = raw_
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 101f85b56d03b36418bbf867f67d81710839b0ec
Gitweb:
https://git.kernel.org/tip/101f85b56d03b36418bbf867f67d81710839b0ec
Author: Ming Lei
AuthorDate: Wed, 28 Aug 2019 16:58:15 +08:00
Committer
: Keith Busch
Cc: Jon Derrick
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index d905e844bf3a..4d89ad4fae3b 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -
On Tue, Aug 27, 2019 at 08:10:42AM -0700, Bart Van Assche wrote:
> On 8/27/19 1:53 AM, Ming Lei wrote:
> > If one vector is spread on several CPUs, usually the interrupt is only
> > handled on one of these CPUs.
>
> Is that perhaps a limitation of x86 interrupt handli
On Tue, Aug 27, 2019 at 06:19:00PM +0200, Thomas Gleixner wrote:
> On Tue, 27 Aug 2019, Thomas Gleixner wrote:
> > On Tue, 27 Aug 2019, Ming Lei wrote:
> > > +/*
> > > + * Update average irq interval with the Exponential Weighted Moving
> > > + * Average
On Tue, Aug 27, 2019 at 04:42:02PM +0200, Thomas Gleixner wrote:
> On Tue, 27 Aug 2019, Ming Lei wrote:
> > +/*
> > + * Update average irq interval with the Exponential Weighted Moving
> > + * Average(EWMA)
> > + */
> > +static void irq_update_i
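The quoted comment describes updating the average irq interval with an Exponential Weighted Moving Average. A generic sketch of the idea follows; the 1/8 weight is illustrative, and the kernel's actual implementation uses fixed-point shifts rather than floating point:

```python
# Generic EWMA: new_avg = old_avg + (sample - old_avg) / weight,
# i.e. alpha = 1/weight. Weight of 8 is a demo choice.
def ewma_update(avg, sample, weight=8):
    return avg + (sample - avg) / weight

avg = 0.0
for interval_us in (100.0, 100.0, 100.0, 100.0):
    avg = ewma_update(avg, interval_us)
print(avg)   # converges toward the steady sample value
```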
The following commit has been merged into the irq/core branch of tip:
Commit-ID: b1a5a73e64e99faa5f4deef2ae96d7371a0fb5d0
Gitweb:
https://git.kernel.org/tip/b1a5a73e64e99faa5f4deef2ae96d7371a0fb5d0
Author: Ming Lei
AuthorDate: Fri, 16 Aug 2019 10:28:49 +08:00
Committer
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 53c1788b7d7720565214a466afffdc818d8c6e5f
Gitweb:
https://git.kernel.org/tip/53c1788b7d7720565214a466afffdc818d8c6e5f
Author: Ming Lei
AuthorDate: Fri, 16 Aug 2019 10:28:48 +08:00
Committer
On Tue, Aug 27, 2019 at 11:06:20AM +0200, Johannes Thumshirn wrote:
> On 27/08/2019 10:53, Ming Lei wrote:
> [...]
> > + char *devname;
> > + const struct cpumask *mask;
> > + unsigned long irqflags = IRQF_SHARED;
> > + in
Cc: Hannes Reinecke
Cc: linux-n...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Signed-off-by: Ming Lei
---
include/linux/interrupt.h | 6 ++
kernel/irq/handle.c | 6 +-
kernel/irq/manage.c | 12
3 files changed, 23 insertions(+), 1 deletion(-)
diff
: Jens Axboe
Cc: Christoph Hellwig
Cc: Sagi Grimberg
Cc: John Garry
Cc: Thomas Gleixner
Cc: Hannes Reinecke
Cc: linux-n...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Signed-off-by: Ming Lei
---
kernel/irq/manage.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff
...@vger.kernel.org
Signed-off-by: Ming Lei
---
drivers/nvme/host/pci.c | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 45a80b708ef4..0b8d49470230 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme
average interrupt interval.
Cc: Long Li
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Keith Busch
Cc: Jens Axboe
Cc: Christoph Hellwig
Cc: Sagi Grimberg
Cc: John Garry
Cc: Thomas Gleixner
Cc: Hannes Reinecke
Cc: linux-n...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Signed-off-by: Ming Lei
patch uses irq's affinity in case of IRQF_RESCUE_THREAD.
Please review & comment!
Long, please test and see if your issue can be fixed.
Ming Lei (4):
softirq: implement IRQ flood detection mechanism
genirq: add IRQF_RESCUE_THREAD
nvme: pci: pass IRQF_RESCUE_THREAD to request_threaded
On Mon, Aug 19, 2019 at 08:49:35PM +0800, Ming Lei wrote:
> Hi Thomas,
>
> The 1st patch makes __irq_build_affinity_masks() more reliable, such as,
> all nodes can be covered in the spread.
>
> The 2nd patch spread vectors on node according to the ratio of this node's
>
On Tue, Aug 20, 2019 at 10:33:38AM -0700, Sagi Grimberg wrote:
>
> > From: Long Li
> >
> > When a NVMe hardware queue is mapped to several CPU queues, it is possible
> > that the CPU this hardware queue is bound to is flooded by returning I/O for
> > other CPUs.
> >
> > For example, consider
>mq_map[cpu] = cpu;
> + }
The block layer provides the blk_mq_map_queues() helper, so I suggest using
the default CPU mapping instead of inventing a new one.
thanks,
Ming Lei
On Mon, Aug 19, 2019 at 04:02:21PM +0200, Thomas Gleixner wrote:
> On Mon, 19 Aug 2019, Ming Lei wrote:
> > On Mon, Aug 19, 2019 at 03:13:58PM +0200, Thomas Gleixner wrote:
> > > On Mon, 19 Aug 2019, Ming Lei wrote:
> > >
> > > > Cc: Jon Derrick
> >
On Thu, Aug 22, 2019 at 10:00 AM Keith Busch wrote:
>
> On Wed, Aug 21, 2019 at 7:34 PM Ming Lei wrote:
> > On Wed, Aug 21, 2019 at 04:27:00PM +, Long Li wrote:
> > > Here is the command to benchmark it:
> > >
> > > fio --bs=4k --ioengine=libaio --iod
On Wed, Aug 21, 2019 at 04:27:00PM +, Long Li wrote:
> >>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe
> >>>
> >>>On Wed, Aug 21, 2019 at 07:47:44AM +, Long Li wrote:
> >>>> >>>Subject: Re: [PATCH 0/3] fix interrupt swam
On Wed, Aug 21, 2019 at 07:47:44AM +, Long Li wrote:
> >>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe
> >>>
> >>>On 20/08/2019 09:25, Ming Lei wrote:
> >>>> On Tue, Aug 20, 2019 at 2:14 PM wrote:
> >>>>>
> &g