On 08/22/2017 11:55 PM, Bart Van Assche wrote:
> On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
>> +	/*
>> +	 * if there is q->queue_depth, all hw queues share
>> +	 * this queue depth limit
>> +	 */
>> +	if (q->queue_depth) {
>> +		queue_for_each_hw_ctx(q, hctx, i)
>>
On 08/22/2017 07:32 PM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> max_discard_sectors and max_write_zeroes_sectors are in 512 byte
> sectors, not device sectors.
>
> Fixes: f2c6df7dbf9a ("loop: support 4k physical blocksize")
> Signed-off-by: Omar Sandoval
> ---
> drivers/block/loop.c | 8
On Wed, Aug 23, 2017 at 2:10 AM, Akinobu Mita wrote:
> 2017-08-23 8:00 GMT+09:00 Bart Van Assche :
>> Certain faults should be injected independent of the context
>> in which these occur. Commit e41d58185f14 made it impossible to
>> inject faults independent of their context. Restore support for
>
On (08/23/17 13:35), Boqun Feng wrote:
> > KERN_CONT and "\n" should not be together. "\n" flushes the cont
> > buffer immediately.
> >
>
> Hmm.. Not quite familiar with printk() stuffs, but I could see several
> usages of printk(KERN_CONT "...\n") in kernel.
>
> Did a bit research myself, and I
On Wed, Aug 23, 2017 at 12:38:13PM +0800, Boqun Feng wrote:
> From: Boqun Feng
> Date: Wed, 23 Aug 2017 12:12:16 +0800
> Subject: [PATCH] lockdep: Print proper scenario if cross deadlock detected at
> acquisition time
>
> For a potential deadlock involving CROSSRELEASE, as follows:
>
> P1
On (08/23/17 13:35), Boqun Feng wrote:
[..]
> > > printk(KERN_CONT ");\n");
> >
> > KERN_CONT and "\n" should not be together. "\n" flushes the cont
> > buffer immediately.
> >
>
> Hmm.. Not quite familiar with printk() stuffs, but I could see several
> usages of printk(KERN_CONT "...\
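For reference, a minimal sketch of the pattern being discussed (the function and
its arguments are illustrative, not from the quoted patch): a line is opened with
a plain printk() and continued with KERN_CONT fragments, and the trailing "\n"
is what terminates and flushes the buffered continuation line.

        /* Sketch: build one console line from several printk() calls. */
        static void print_lock_expr(const char *name, int depth)
        {
                printk("  lock(");
                printk(KERN_CONT "%s", name);
                if (depth)
                        printk(KERN_CONT "/%d", depth);
                /* the "\n" ends the KERN_CONT sequence and flushes the line */
                printk(KERN_CONT ");\n");
        }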
On Wed, Aug 23, 2017 at 01:46:48PM +0900, Sergey Senozhatsky wrote:
> On (08/23/17 12:38), Boqun Feng wrote:
> [..]
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index 642fb5362507..a3709e15f609 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
Bart Van Assche wrote:
> > > > On Tue, 2017-08-22 at 19:47 +0900, Sergey Senozhatsky wrote:
> > > > > ==
> > > > > WARNING: possible circular locking dependency detected
> > > > &
On (08/23/17 12:38), Boqun Feng wrote:
[..]
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 642fb5362507..a3709e15f609 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -1156,6 +1156,23 @@ print_circular_lock_scenario(struct held_lock *src,
>
Sergey Senozhatsky wrote:
> > > > ==
> > > > WARNING: possible circular locking dependency detected
> > > > 4.13.0-rc6-next-20170822-dbg-00020-g39758ed8aae0-dirty #1746 Not tainted
> > > > --
> > WARNING: possible circular locking dependency detected
> > > 4.13.0-rc6-next-20170822-dbg-00020-g39758ed8aae0-dirty #1746 Not tainted
> > > --
> > > fsck.ext4/148 is trying to acquire lock:
> >
On Wed, Aug 9, 2017 at 9:44 AM, Ming Lei wrote:
> Hi David,
>
> On Wed, Aug 9, 2017 at 2:13 AM, David Jeffery wrote:
>> On 08/07/2017 07:53 PM, Ming Lei wrote:
>>> On Tue, Aug 8, 2017 at 3:38 AM, David Jeffery wrote:
>>
Signed-off-by: David Jeffery
---
block/blk-sysfs.c |
On Wed, Aug 23, 2017 at 11:36:49AM +0900, Sergey Senozhatsky wrote:
> On (08/23/17 09:03), Byungchul Park wrote:
> [..]
>
> aha, ok
>
> > The report is talking about the following lockup:
> >
> > A work in a worker A task work on exit to user
> > --
On (08/23/17 09:03), Byungchul Park wrote:
[..]
aha, ok
> The report is talking about the following lockup:
>
> A work in a worker A task work on exit to user
> -- ---
> mutex_lock(&bdev->bd_mutex)
>
2017-08-23 8:00 GMT+09:00 Bart Van Assche :
> Certain faults should be injected independent of the context
> in which these occur. Commit e41d58185f14 made it impossible to
> inject faults independent of their context. Restore support for
> task-independent fault injection by adding the attribute '
On Tue, Aug 22, 2017 at 09:43:56PM +, Bart Van Assche wrote:
> On Tue, 2017-08-22 at 19:47 +0900, Sergey Senozhatsky wrote:
> > ==
> > WARNING: possible circular locking dependency detected
> > 4.13.0-rc6-next-20170822-db
Commit e41d58185f14 made all faults that are triggered from task
context, including I/O timeouts, dependent on the failure
injection settings for that task. Make it again possible to inject
I/O timeout failures independent of task context. An example for a
fault injection for which this patch makes
Certain faults should be injected independent of the context
in which these occur. Commit e41d58185f14 made it impossible to
inject faults independent of their context. Restore support for
task-independent fault injection by adding the attribute 'global'.
References: commit e41d58185f14 ("fault-in
Hello Andrew,
A recent change in the fault injection code introduced an undesired change
in the behavior of the I/O timeout failure injection code. This series
restores the original behavior. Please consider these patches for kernel
v4.14.
Thanks,
Bart.
Changes compared to v1:
- Fixed build wit
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> + /*
> + * if there is q->queue_depth, all hw queues share
> + * this queue depth limit
> + */
> + if (q->queue_depth) {
> + queue_for_each_hw_ctx(q, hctx, i)
> + hctx->flags |= BLK_MQ_F_SHAR
On Tue, 2017-08-22 at 19:47 +0900, Sergey Senozhatsky wrote:
> ==
> WARNING: possible circular locking dependency detected
> 4.13.0-rc6-next-20170822-dbg-00020-g39758ed8aae0-dirty #1746 No
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> +static inline void blk_mq_do_dispatch_ctx(struct request_queue *q,
> + struct blk_mq_hw_ctx *hctx)
> +{
> + LIST_HEAD(rq_list);
> + struct blk_mq_ctx *ctx = NULL;
> +
> + do {
> + str
On 8/22/2017 10:12 AM, Paolo Bonzini wrote:
On 20/08/2017 22:56, Paul E. McKenney wrote:
KVM: async_pf: avoid async pf injection when in guest mode
KVM: cpuid: Fix read/write out-of-bounds vulnerability in cpuid emulation
arm: KVM: Allow unaligned accesses at HYP
arm6
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> This patch introduces per-request_queue dispatch
> list for this purpose, and only when all requests
> in this list are dispatched out successfully, we
> can restart to dequeue request from sw/scheduler
> queue and dispatch it to lld.
Wasn't one
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> +static inline bool blk_mq_has_dispatch_rqs(struct blk_mq_hw_ctx *hctx)
> +{
> + return !list_empty_careful(&hctx->dispatch);
> +}
> +
> +static inline void blk_mq_add_rq_to_dispatch(struct blk_mq_hw_ctx *hctx,
> + struct request
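The quoted hunk is cut off above, so as a plausible completion (the body here is
only a sketch inferred from how hctx->dispatch is protected by hctx->lock
elsewhere in blk-mq, not necessarily what Ming's patch does):

        static inline void blk_mq_add_rq_to_dispatch(struct blk_mq_hw_ctx *hctx,
                                                     struct request *rq)
        {
                /* hctx->dispatch is protected by hctx->lock in blk-mq */
                spin_lock(&hctx->lock);
                list_add(&rq->queuelist, &hctx->dispatch);
                spin_unlock(&hctx->lock);
        }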
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> +static inline bool blk_mq_hctx_is_dispatch_busy(struct blk_mq_hw_ctx *hctx)
> +{
> + return test_bit(BLK_MQ_S_DISPATCH_BUSY, &hctx->state);
> +}
> +
> +static inline void blk_mq_hctx_set_dispatch_busy(struct blk_mq_hw_ctx *hctx)
> +{
> +
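The setter is truncated above; the obvious counterparts of the quoted test_bit()
helper would look like the following sketch (set/clear bodies assumed, not taken
from the patch):

        static inline void blk_mq_hctx_set_dispatch_busy(struct blk_mq_hw_ctx *hctx)
        {
                set_bit(BLK_MQ_S_DISPATCH_BUSY, &hctx->state);
        }

        static inline void blk_mq_hctx_clear_dispatch_busy(struct blk_mq_hw_ctx *hctx)
        {
                clear_bit(BLK_MQ_S_DISPATCH_BUSY, &hctx->state);
        }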
On 08/22/2017 11:32 AM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> Patches 1, 3, and 4 are the same as v2, plus added reviewed-bys and
> tested-bys from Milan. Patch 2 is new.
>
> Omar Sandoval (4):
> loop: fix hang if LOOP_SET_STATUS gets invalid blocksize or encrypt
> type
> loop:
On 08/21/2017 08:35 AM, sba...@raithlin.com wrote:
> From: Stephen Bates
>
> Hybrid polling currently uses half the average completion time as an
> estimate of how long to poll for. We can improve upon this by noting
> that polling before the minimum completion time makes no sense. Add a
> sysfs
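A sketch of the heuristic being described (function and variable names are
illustrative only): the poll sleep time is still derived from the mean completion
time, but it is never allowed to be shorter than the minimum completion time seen,
since polling before that point cannot find a completed request.

        static u64 hybrid_poll_sleep_ns(u64 mean_ns, u64 min_ns)
        {
                u64 sleep_ns = mean_ns / 2;     /* current heuristic */

                /* waking up before the shortest observed completion is pointless */
                if (sleep_ns < min_ns)
                        sleep_ns = min_ns;

                return sleep_ns;
        }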
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> SCSI sets q->queue_depth from shost->cmd_per_lun, and
> q->queue_depth is per request_queue and more related to
> scheduler queue compared with hw queue depth, which can be
> shared by queues, such as TAG_SHARED.
>
> This patch tries to use q->qu
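A rough sketch of the path described above, for orientation (simplified): the host
template's cmd_per_lun ends up as the scsi_device queue depth, and
scsi_change_queue_depth() mirrors it into the request_queue via
blk_set_queue_depth(), which is where blk-mq sees q->queue_depth.

        /* SCSI side: per-LUN limit becomes the device queue depth ... */
        scsi_change_queue_depth(sdev, sdev->host->cmd_per_lun);
        /* ... which internally calls blk_set_queue_depth() and thereby
         * populates q->queue_depth on sdev->request_queue. */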
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> The following patch will propose some hints for figuring out the
> default queue depth for the scheduler queue, so introduce the helper
> blk_mq_sched_queue_depth() for this purpose.
Reviewed-by: Bart Van Assche
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> + /*
> + * Wherever DISPATCH_BUSY is set, blk_mq_run_hw_queue()
> + * will be run to try to make progress, so it is always
> + * safe to check the state here.
> + */
> + if (test_bit(BLK_MQ_S_DISPATCH_BUSY, &hctx->stat
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> easy to cause queue busy becasue of the small
^^^
because?
> -static void blk_mq_do_dispatch(struct request_queue *q,
> -struct elevator_queue *e,
> -
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> So that it becomes easy to support dispatching from the
> sw queue in the following patch.
Reviewed-by: Bart Van Assche
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> More importantly, for some SCSI devices, driver
> tags are host wide, and the number is quite big,
> but each lun has very limited queue depth.
This may be the case but is not always the case. Another important use-case
is one LUN per host and w
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> -static inline void sbitmap_for_each_set(struct sbitmap *sb, sb_for_each_fn fn,
> -					void *data)
> +static inline void __sbitmap_for_each_set(struct sbitmap *sb,
> + unsi
On Sat, 2017-08-05 at 14:56 +0800, Ming Lei wrote:
> /**
> * sbitmap_for_each_set() - Iterate over each set bit in a &struct sbitmap.
> + * @off: Offset to iterate from
> * @sb: Bitmap to iterate over.
> * @fn: Callback. Should return true to continue or false to break early.
> * @data: Poi
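A usage sketch of the new offset parameter (the callback, the 'last_ctx' cursor,
and the parameter order are assumptions based on the quoted hunk): a caller can
resume a round-robin scan of set bits from wherever the previous scan stopped.

        static bool dispatch_one_ctx(struct sbitmap *sb, unsigned int bitnr, void *data)
        {
                /* ... handle the software queue mapped to bit 'bitnr' ... */
                return true;    /* keep iterating over the remaining set bits */
        }

        static void scan_from_offset(struct sbitmap *sb, unsigned int last_ctx)
        {
                /* start at 'last_ctx' instead of bit 0 for fairness */
                __sbitmap_for_each_set(sb, last_ctx, dispatch_one_ctx, NULL);
        }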
From: Omar Sandoval
There's no reason to track this separately; just use the
logical_block_size queue limit. This also fixes an issue where the
physical block size would get changed unnecessarily.
Tested-by: Milan Broz
Reviewed-by: Hannes Reinecke
Signed-off-by: Omar Sandoval
---
drivers/blo
From: Omar Sandoval
When I was writing a test for the new loop device block size
functionality, I noticed a couple of issues with how LOOP_GET_STATUS
handles the block size:
- lo_init[0] is never filled in with the logical block size we
previously set
- lo_flags returned from LOOP_GET_STATUS w
From: Omar Sandoval
In both of these error cases, we need to make sure to unfreeze the queue
before we return.
Fixes: ecdd09597a57 ("block/loop: fix race between I/O and set_status")
Fixes: f2c6df7dbf9a ("loop: support 4k physical blocksize")
Tested-by: Milan Broz
Reviewed-by: Milan Broz
Revie
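The shape of the fix described above, as a sketch (loop_apply_status() and its
failing step are hypothetical): once the queue has been frozen, every error path
must go through blk_mq_unfreeze_queue() before returning.

        static int loop_apply_status(struct loop_device *lo)
        {
                int err;

                blk_mq_freeze_queue(lo->lo_queue);

                err = validate_new_status(lo);  /* hypothetical step that can fail */
                if (err)
                        goto out_unfreeze;

                /* ... apply the new settings ... */
                err = 0;

        out_unfreeze:
                blk_mq_unfreeze_queue(lo->lo_queue);
                return err;
        }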
From: Omar Sandoval
max_discard_sectors and max_write_zeroes_sectors are in 512 byte
sectors, not device sectors.
Fixes: f2c6df7dbf9a ("loop: support 4k physical blocksize")
Signed-off-by: Omar Sandoval
---
drivers/block/loop.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
dif
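A minimal illustration of the unit convention the fix relies on (the byte value is
arbitrary, and the generic queue-limit setters are shown rather than loop.c's exact
code): these limits are counted in 512-byte sectors regardless of the device's
logical block size, so a size in bytes is converted by shifting right by 9.

        unsigned int max_bytes = 1 << 16;       /* arbitrary example: 64 KiB */

        blk_queue_max_discard_sectors(q, max_bytes >> 9);
        blk_queue_max_write_zeroes_sectors(q, max_bytes >> 9);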
From: Omar Sandoval
Patches 1, 3, and 4 are the same as v2, plus added reviewed-bys and
tested-bys from Milan. Patch 2 is new.
Omar Sandoval (4):
loop: fix hang if LOOP_SET_STATUS gets invalid blocksize or encrypt
type
loop: set discard and write zeroes limits in 512 byte sectors
loop:
Hello, Michael.
On Tue, Aug 22, 2017 at 11:41:41AM +1000, Michael Ellerman wrote:
> > This is something powerpc needs to fix.
>
> There is no way for us to fix it.
I don't think that's true. The CPU id used in the kernel doesn't have to
match the physical one, and arch code should be able to pre-map
Commit e41d58185f14 made all faults that are triggered from task
context, including I/O timeouts, dependent on the failure
injection settings for that task. Make it again possible to inject
I/O timeout failures independent of task context. An example for a
fault injection for which this patch makes
Certain faults should be injected independent of the context
in which these occur. Commit e41d58185f14 made it impossible to
inject faults independent of their context. Restore support for
task-independent fault injection by adding the attribute 'global'.
References: commit e41d58185f14 ("fault-in
Hello Andrew,
A recent change in the fault injection code introduced an undesired change
in the behavior of the I/O timeout failure injection code. This series
restores the original behavior. Please consider these patches for kernel
v4.14.
Thanks,
Bart.
Bart Van Assche (2):
fault-inject: Rest
All support is already there in the generic code, we just need to wire
it up.
Signed-off-by: Christoph Hellwig
---
fs/block_dev.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9941dc8342df..ea21d18d8e79 100644
--- a/fs/block_dev.c
+++ b/fs/block_d
From: Milosz Tanski
Allow generic_file_buffered_read to bail out early instead of waiting for
the page lock or reading a page if IOCB_NOWAIT is specified.
Signed-off-by: Milosz Tanski
Reviewed-by: Christoph Hellwig
Reviewed-by: Jeff Moyer
Acked-by: Sage Weil
---
mm/filemap.c | 15 ++
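A simplified sketch of the bail-out described above (the real
generic_file_buffered_read() has many more branches): with IOCB_NOWAIT the read
must never sleep, so anything that would block on reading a page in or on the page
lock returns -EAGAIN instead.

        if (iocb->ki_flags & IOCB_NOWAIT) {
                if (!PageUptodate(page))
                        return -EAGAIN;         /* would have to read the page in */
                if (!trylock_page(page))
                        return -EAGAIN;         /* would have to sleep on the lock */
        } else {
                lock_page(page);                /* normal path may sleep */
        }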
This is based on the old idea and code from Milosz Tanski. With the aio
nowait code it becomes mostly trivial now. Buffered writes continue to
return -EOPNOTSUPP if RWF_NOWAIT is passed.
Signed-off-by: Christoph Hellwig
---
fs/aio.c | 6 --
fs/btrfs/file.c| 6 +-
fs/ext
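A sketch of the rule stated above (placement in the write path is assumed):
non-direct, i.e. buffered, writes still reject RWF_NOWAIT; only the read side
gains non-blocking support in this series.

        if ((iocb->ki_flags & IOCB_NOWAIT) && !(iocb->ki_flags & IOCB_DIRECT))
                return -EOPNOTSUPP;     /* buffered writes stay unsupported */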
This series resurrects the old patches from Milosz to implement
non-blocking buffered reads. Thanks to the non-blocking AIO code from
Goldwyn the implementation becomes pretty much trivial.
I've also forward ported the test Milosz sent for recent xfsprogs to
verify that this series works properly
And rename it to the more descriptive generic_file_buffered_read while
at it.
Signed-off-by: Christoph Hellwig
---
mm/filemap.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index a49702445ce0..4bcfa74ad802 100644
--- a/mm/filemap
On 20/08/2017 22:56, Paul E. McKenney wrote:
>> KVM: async_pf: avoid async pf injection when in guest mode
>> KVM: cpuid: Fix read/write out-of-bounds vulnerability in cpuid
>> emulation
>> arm: KVM: Allow unaligned accesses at HYP
>> arm64: KVM: Allow unaligned accesses at
Hi.
A v4.12.8 kernel hangs in the I/O path after resuming from suspend-to-ram. I have
blk-mq enabled and tried both the BFQ and mq-deadline schedulers with the same
result. A soft lockup happens, showing the stack traces I'm pasting below.
The stack trace shows that I/O hangs in md_super_wait(), which means it waits fo
On 08/18/2017 09:27 PM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> Patches 1 and 3 are from the original series.
>
> Patch 2 gets rid of the redundant struct loop_device.lo_logical_blocksize
> in favor of using the queue's own logical_block_size. Karel, I decided
> against adding another sys
Hello,
==
WARNING: possible circular locking dependency detected
4.13.0-rc6-next-20170822-dbg-00020-g39758ed8aae0-dirty #1746 Not tainted
--
fsck.ext4/148 is trying to acquire lock:
(&