(This report on Bugzilla): https://bugzilla.kernel.org/show_bug.cgi?id=197303
On an ARM Odroid U3 board, with the root fs on eMMC, 4.14-rc4 produced this:
blk_partition_remap: fail for partition 3
[52311.638650] EXT4-fs error (device mmcblk1p3): ext4_find_entry:1431: inode #20381: comm kworker/u8:0: read
On Mon, Oct 30, 2017 at 08:24:57PM +, Bart Van Assche wrote:
> On Fri, 2017-10-27 at 13:38 +0800, Ming Lei wrote:
> > On Fri, Oct 27, 2017 at 04:53:18AM +, Bart Van Assche wrote:
> > > On Fri, 2017-10-27 at 12:43 +0800, Ming Lei wrote:
> > > > The 1st patch removes the RESTART for TAG-SHARED
Convert blk_get_request(q, op, __GFP_RECLAIM) into
blk_get_request_flags(q, op, BLK_MQ_REQ_PREEMPT). This patch does not
change any functionality.
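The mechanical shape of such a call-site conversion might look like the following diff (illustrative only; the actual op code and call sites vary per driver):

```diff
-	rq = blk_get_request(q, REQ_OP_SCSI_IN, __GFP_RECLAIM);
+	rq = blk_get_request_flags(q, REQ_OP_SCSI_IN, BLK_MQ_REQ_PREEMPT);
```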
Signed-off-by: Bart Van Assche
Tested-by: Martin Steigerwald
Acked-by: David S. Miller [ for IDE ]
Acked-by: Martin K. Petersen
Reviewed-by: Hannes Reinecke
This flag will be used in the next patch to let the block layer
core know whether or not a SCSI request queue has been quiesced.
This is because a quiesced SCSI queue only processes RQF_PREEMPT requests.
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin Steigerwald
Cc: Ming Lei
Set RQF_PREEMPT if BLK_MQ_REQ_PREEMPT is passed to
blk_get_request_flags().
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin Steigerwald
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Johannes Thumshirn
---
block/blk-core.c | 4 +++-
block/blk-mq.c | 2 ++
Several block layer and NVMe core functions accept a combination
of BLK_MQ_REQ_* flags through the 'flags' argument but there is
no verification at compile time whether the right type of block
layer flags is passed. Make it possible for sparse to verify this.
This patch does not change any functionality.
The contexts from which a SCSI device can be quiesced or resumed are:
* Writing into /sys/class/scsi_device/*/device/state.
* SCSI parallel (SPI) domain validation.
* The SCSI device power management methods. See also scsi_bus_pm_ops.
It is essential during suspend and resume that neither the file
From: Ming Lei
This patch makes it possible to pause request allocation for
the legacy block layer by calling blk_mq_freeze_queue() and
blk_mq_unfreeze_queue().
Signed-off-by: Ming Lei
[ bvanassche: Combined two patches into one, edited a comment and made sure
REQ_NOWAIT is handled properly i
A side effect of this patch is that the GFP mask that is passed to
several allocation functions in the legacy block layer is changed
from GFP_KERNEL into __GFP_DIRECT_RECLAIM.
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin Steigerwald
Cc: Christoph Hellwig
Cc: Ming Lei
Hello Jens,
It is known that during the resume following a hibernate, especially when
using an md RAID1 array created on top of SCSI devices, sometimes the system
hangs instead of coming up properly. This patch series fixes that
problem. These patches have been tested on top of the block layer for
On Mon, Oct 30 2017, Michael Lyle wrote:
> Hi Jens--
>
> I have a few last patches for bcache targeting 4.15 if it's
> possible to get them in. I'm sorry this is a bit late.
>
> All are reviewed and have received a moderate amount of test
> in my environment (and I'm continuing testing).
>
> [P
From: Elena Reshetova
atomic_t variables are currently used to implement reference
counters with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further
increments aren't allowed
-
From: Tang Junhui
bucket_in_use is updated in the gc thread, which is triggered by invalidating or
writing sectors_to_gc of dirty data; that is a long interval. Therefore, when we
use it to compare with the threshold, it is often stale, which leads
to inaccurate judgment and often results in bucket depletion
Hi Jens--
I have a few last patches for bcache targeting 4.15 if it's
possible to get them in. I'm sorry this is a bit late.
All are reviewed and have received a moderate amount of test
in my environment (and I'm continuing testing).
[PATCH 1/5] bcache: only permit to recovery read error when c
From: "tang.junhui"
Currently, cache-missed IOs are identified by s->cache_miss, but in fact
there are many situations in which missed IOs are not assigned a value for
s->cache_miss in cached_dev_cache_miss(), for example a bypassed IO
(s->iop.bypass = 1), or a failed cache_bio allocation. In these
From: Coly Li
When bcache does read I/Os, for example in writeback or writethrough mode,
if a read request on the cache device fails, bcache will try to recover
the request by reading from the cached device. If the data on the cached
device is not synced with the cache device, then the requester will get stale
From: Liang Chen
mutex_destroy() does nothing most of the time, but it is better to call
it to make the code future-proof, and it also matters for things like
mutex debugging.
As Coly pointed out in a previous review, bcache_exit() may not be
able to handle all the references properly if userspace registe
On 10/30/2017 03:37 PM, Bart Van Assche wrote:
> On Wed, 2017-10-18 at 15:57 -0500, Brian King wrote:
>> On 10/17/2017 01:19 AM, Hannes Reinecke wrote:
>>> On 10/17/2017 12:49 AM, Bart Van Assche wrote:
[ ... ]
>>>
>>> Not sure if this is a valid conversion.
>>> Originally the driver would all
On Wed, 2017-10-18 at 15:57 -0500, Brian King wrote:
> On 10/17/2017 01:19 AM, Hannes Reinecke wrote:
> > On 10/17/2017 12:49 AM, Bart Van Assche wrote:
> > > [ ... ]
> >
> > Not sure if this is a valid conversion.
> > Originally the driver would allocate a single buffer; with this buffer
> > we h
On Fri, 2017-10-27 at 19:55 +0200, Roman Penyaev wrote:
> That's just a bug in the code, not an issue with restarts, which can be fixed
> if we put the hctxs that need to be restarted on percpu lists and avoid
> long loops and contention.
Hello Roman,
Have you noticed that recently .get_budget()
On Fri, 2017-10-27 at 13:38 +0800, Ming Lei wrote:
> On Fri, Oct 27, 2017 at 04:53:18AM +, Bart Van Assche wrote:
> > On Fri, 2017-10-27 at 12:43 +0800, Ming Lei wrote:
> > > The 1st patch removes the RESTART for TAG-SHARED because SCSI handles it
> > > by itself, and it is not necessary to waste CPU
On 10/30/2017 12:37 PM, Bart Van Assche wrote:
> On Mon, 2017-10-30 at 12:16 -0600, Jens Axboe wrote:
>> On 10/19/2017 11:00 AM, Bart Van Assche wrote:
>>> Make sure that if the timeout timer fires after a queue has been
>>> marked "dying" that the affected requests are finished.
>>>
>>> Reported-b
On Mon, 2017-10-30 at 12:16 -0600, Jens Axboe wrote:
> On 10/19/2017 11:00 AM, Bart Van Assche wrote:
> > Make sure that if the timeout timer fires after a queue has been
> > marked "dying" that the affected requests are finished.
> >
> > Reported-by: chenxiang (M)
> > Fixes: commit 287922eb0b18
On 10/19/2017 11:00 AM, Bart Van Assche wrote:
> Make sure that if the timeout timer fires after a queue has been
> marked "dying" that the affected requests are finished.
>
> Reported-by: chenxiang (M)
> Fixes: commit 287922eb0b18 ("block: defer timeouts to a workqueue")
> Signed-off-by: Bart Va
On Thu, 2017-10-19 at 10:00 -0700, Bart Van Assche wrote:
> Make sure that if the timeout timer fires after a queue has been
> marked "dying" that the affected requests are finished.
>
> Reported-by: chenxiang (M)
> Fixes: commit 287922eb0b18 ("block: defer timeouts to a workqueue")
(replying to
Instead of referring from inside drivers/cdrom/Makefile to all the
drivers that use the cdrom driver, let these drivers select it. This
change makes the cdrom build code follow the approach that is used for
most other drivers, namely referring from the higher layers to the
lower layer instead
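The Kconfig shape of that approach might look like the fragment below; the symbol names are assumptions for illustration, matching the usual pattern where a higher-level driver pulls in a helper instead of the helper's Makefile listing its users:

```
# Hypothetical Kconfig fragment: the SCSI CD-ROM driver selects the
# shared cdrom helper, so drivers/cdrom/Makefile no longer needs to
# know who uses it.
config BLK_DEV_SR
	tristate "SCSI CDROM support"
	select CDROM
```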
On 30.10.2017 15:55, Tejun Heo wrote:
> On Sun, Oct 29, 2017 at 05:36:53PM +0100, Maciej S. Szmigiero wrote:
>> CFQ scheduler has a property that processes (or tasks in cgroups v1) that
>> aren't assigned to any particular cgroup - that is, which stay in the root
>> cgroup - effectively form an imp
On Sun, Oct 29, 2017 at 05:36:53PM +0100, Maciej S. Szmigiero wrote:
> CFQ scheduler has a property that processes (or tasks in cgroups v1) that
> aren't assigned to any particular cgroup - that is, which stay in the root
> cgroup - effectively form an implicit leaf child node attached to the root
On Fri 27-10-17 01:36:42, weiping zhang wrote:
> device_add_disk needs to do more thorough error handling, so this patch
> just adds a WARN_ON.
>
> Signed-off-by: weiping zhang
> ---
> block/genhd.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/block/genhd.c b/block/genhd.c
>
On Fri 27-10-17 01:36:14, weiping zhang wrote:
> In order to make error handling cleaner, we call bdi_debug_register
> before setting the state to WB_registered, so that we can avoid calling
> bdi_unregister in release_bdi().
>
> Signed-off-by: weiping zhang
> ---
> mm/backing-dev.c | 7 ++-
> 1 file ch
On Fri 27-10-17 01:35:36, weiping zhang wrote:
> this patch adds a check for bdi_debug_root and handles errors for it.
> we should make sure it was created successfully; otherwise, when adding a new
> block device's bdi folder (e.g. 8:0), it will be created in the debugfs root directory.
>
> Signed-off-by: weiping zhang
On Fri 27-10-17 01:35:57, weiping zhang wrote:
> Convert bdi_debug_register to return int and then handle errors from it.
>
> Signed-off-by: weiping zhang
This patch looks good to me. You can add:
Reviewed-by: Jan Kara
Honza
> ---
>
Tejun Heo wrote:
> > The blkg obtained through a blkg_lookup, in a rcu_read section, is
> > protected. But, outside that section, a pointer to that blkg is not
> > guaranteed to be valid any longer. Stat-update functions seem safe in
>
> blkg's destruction is rcu delayed. If you have access t