On 21/11/17 17:39, Ulf Hansson wrote:
> On 21 November 2017 at 14:42, Adrian Hunter wrote:
>> card_busy_detect() has a 10 minute timeout. However the correct timeout is
>> the data timeout. Change card_busy_detect() to use the data timeout.
>
> Unfortunately I don't think
From: Tang Junhui
Hi, Mike
Thanks for the reminder. I'll run checkpatch carefully next time.
Thanks,
Tang
Jens, please don't just revert the commit in your for-linus tree.
On its own this will totally mess up the interrupt assignments. Give
me a bit of time to sort this out properly.
Hi Tang Junhui--
Thank you.
On 11/21/2017 10:20 PM, tang.jun...@zte.com.cn wrote:
> From: Tang Junhui
>
> Currently, when a cached device is detached from the cache, the writeback
> thread is not stopped and the writeback_rate_update work is not canceled.
> For example, after the below
On 11/22/2017 06:11 AM, Ming Lei wrote:
> Now we track legacy requests with .q_usage_counter in commit 055f6e18e08f
> ("block: Make q_usage_counter also track legacy requests"), but that
> commit never runs and drains the legacy queue before waiting for this
> counter to become zero, so an IO hang is
From: Tang Junhui
Currently, when a cached device is detached from the cache, the writeback
thread is not stopped and the writeback_rate_update work is not canceled.
For example, after the below command:
echo 1 >/sys/block/sdb/bcache/detach
you can still see the writeback thread. Then
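Roughly, the cleanup the detach path needs looks like the sketch below
(untested; the function placement and the struct fields are assumptions based
on the bcache code and may differ):

static void stop_writeback_on_detach(struct cached_dev *dc)
{
	/* Cancel the periodic rate-update worker and wait for it to finish. */
	cancel_delayed_work_sync(&dc->writeback_rate_update);

	/* Stop the writeback kthread; its loop must check
	 * kthread_should_stop() for this to return. */
	if (!IS_ERR_OR_NULL(dc->writeback_thread)) {
		kthread_stop(dc->writeback_thread);
		dc->writeback_thread = NULL;
	}
}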
Hi, Michael
Thanks for the reminder. I'll run checkpatch carefully next time.
Thanks
2017-11-22 13:13 GMT+08:00 Michael Lyle :
> Reviewed-by: Michael Lyle
>
> Please note, though, that your patches have all tabs expanded to spaces,
> which makes them difficult to
From: Tang Junhui
In a scenario where there are some flash-only volumes and some cached devices,
when many tasks issue requests to these devices in writeback mode, the write
IOs may fall into the same bucket, as below:
| cached data | flash data | cached data | cached data | flash
From: Tang Junhui
Hello Coly, Mike
> > If the change can be inside bch_register_lock, it would (just) be more
> > comfortable. The code is correct, because the attach/detach sysfs is created
> > after the writeback_thread is created and the writeback_rate_update worker
> > is initialized,
From: Tang Junhui
Hello Mike
> Thanks, this looks much better. Can you please fix the whitespace
> issues so it gets through checkpatch cleanly?
OK, I'll resend a patch later.
Thanks,
Tang
Tang Junhui--
Thanks for noticing this issue.
On Wed, Nov 1, 2017 at 4:55 AM, Coly Li wrote:
> On 2017/10/31 4:14 PM, tang.jun...@zte.com.cn wrote:
>> From: Tang Junhui
>>
>> Currently, when a cached device is detached from the cache, the writeback thread is
>> not
Reviewed-by: Michael Lyle
Please note, though, that your patches have all tabs expanded to spaces,
which makes them difficult to apply. I have fixed things this time,
but please try to submit them in the correct form in the future.
Thanks,
Mike
On 11/17/2017 04:57 PM, Michael
Now we track legacy requests with .q_usage_counter in commit 055f6e18e08f
("block: Make q_usage_counter also track legacy requests"), but that
commit never runs and drains the legacy queue before waiting for this counter
to become zero, so an IO hang is caused in the test of pulling a disk during IO.
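The rough shape of the fix would be something like the sketch below (the
placement in blk_freeze_queue() and the use of blk_drain_queue() are
assumptions here, not necessarily the final form):

void blk_freeze_queue(struct request_queue *q)
{
	blk_freeze_queue_start(q);

	/* Assumed: legacy (!mq) queues need an explicit drain here so the
	 * wait below cannot hang on requests that are never dispatched. */
	if (!q->mq_ops)
		blk_drain_queue(q);

	blk_mq_freeze_queue_wait(q);
}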
Rui Hua--
Thank you for fixing this.
On 11/21/2017 05:58 AM, Rui Hua wrote:
> The read request might hit an error when searching the btree, but the error was
> not handled in cache_lookup(), and this kind of metadata failure will not go
> into cached_dev_read_error(); finally, the upper layer will
Tang Junhui---
On 11/21/2017 07:25 PM, tang.jun...@zte.com.cn wrote:
> From: Tang Junhui
>
> In a scenario where there are some flash-only volumes and some cached
> devices, when many tasks issue requests to these devices in writeback
> mode, the write IOs may fall into the same
On Tue, Nov 21 2017 at 11:00pm -0500,
NeilBrown wrote:
> On Tue, Nov 21 2017, Mikulas Patocka wrote:
>
> > On Tue, 21 Nov 2017, Mike Snitzer wrote:
> >
> >> On Tue, Nov 21 2017 at 4:23pm -0500,
> >> Mikulas Patocka wrote:
> >>
> >> > This is not correct:
On Tue, Nov 21 2017, Mikulas Patocka wrote:
> On Tue, 21 Nov 2017, Mike Snitzer wrote:
>
>> On Tue, Nov 21 2017 at 4:23pm -0500,
>> Mikulas Patocka wrote:
>>
>> > This is not correct:
>> >
>> >2206 static void dm_wq_work(struct work_struct *work)
>> >2207 {
>> >
From: Tang Junhui
In a scenario where there are some flash-only volumes and some cached devices,
when many tasks issue requests to these devices in writeback mode, the write
IOs may fall into the same bucket, as below:
| cached data | flash data | cached data | cached data | flash
On Tue, Nov 21 2017 at 8:21pm -0500,
Mikulas Patocka wrote:
>
>
> On Tue, 21 Nov 2017, Mike Snitzer wrote:
>
> > On Tue, Nov 21 2017 at 4:23pm -0500,
> > Mikulas Patocka wrote:
> >
> > > This is not correct:
> > >
> > >2206 static void
From: Tang Junhui
Hello Coly, Kent
> Correct me if I am wrong. I guess the reason why you care about flash-only
> volumes is that Ceph users use flash-only volumes to store some
> metadata only on SSDs?
Yes, we store Ceph metadata in flash-only volumes and object data
On Tue, 21 Nov 2017, Mike Snitzer wrote:
> On Tue, Nov 21 2017 at 4:23pm -0500,
> Mikulas Patocka wrote:
>
> > This is not correct:
> >
> >2206 static void dm_wq_work(struct work_struct *work)
> >2207 {
> >2208 struct mapped_device *md =
On Tue, Nov 21 2017, Mikulas Patocka wrote:
> On Tue, 21 Nov 2017, Mike Snitzer wrote:
>
>> On Tue, Nov 21 2017 at 7:43am -0500,
>> Mike Snitzer wrote:
>>
>> > Decided it a better use of my time to review and then hopefully use the
>> > block-core's bio splitting
On Tue, Nov 21 2017 at 4:23pm -0500,
Mikulas Patocka wrote:
>
>
> On Tue, 21 Nov 2017, Mike Snitzer wrote:
>
> > On Tue, Nov 21 2017 at 7:43am -0500,
> > Mike Snitzer wrote:
> >
> > > Decided it a better use of my time to review and then hopefully
On Tue, Nov 21, 2017 at 10:48:27PM +0800, Coly Li wrote:
> On 21/11/2017 6:57 PM, Kent Overstreet wrote:
> > On Tue, Nov 21, 2017 at 06:50:32PM +0800, tang.jun...@zte.com.cn wrote:
> >> From: Tang Junhui
> >>
> >> Currently in pick_data_bucket(), though we keep multiple
On Tue, 21 Nov 2017, Mike Snitzer wrote:
> On Tue, Nov 21 2017 at 7:43am -0500,
> Mike Snitzer wrote:
>
> > Decided it a better use of my time to review and then hopefully use the
> > block-core's bio splitting infrastructure in DM. Been meaning to do
> > that for quite
On 11/21/2017 01:31 PM, Christian Borntraeger wrote:
>
>
> On 11/21/2017 09:21 PM, Jens Axboe wrote:
>> On 11/21/2017 01:19 PM, Christian Borntraeger wrote:
>>>
>>> On 11/21/2017 09:14 PM, Jens Axboe wrote:
On 11/21/2017 01:12 PM, Christian Borntraeger wrote:
>
>
> On 11/21/2017
On 11/21/2017 09:21 PM, Jens Axboe wrote:
> On 11/21/2017 01:19 PM, Christian Borntraeger wrote:
>>
>> On 11/21/2017 09:14 PM, Jens Axboe wrote:
>>> On 11/21/2017 01:12 PM, Christian Borntraeger wrote:
On 11/21/2017 08:30 PM, Jens Axboe wrote:
> On 11/21/2017 12:15 PM,
On 11/21/2017 01:19 PM, Christian Borntraeger wrote:
>
> On 11/21/2017 09:14 PM, Jens Axboe wrote:
>> On 11/21/2017 01:12 PM, Christian Borntraeger wrote:
>>>
>>>
>>> On 11/21/2017 08:30 PM, Jens Axboe wrote:
On 11/21/2017 12:15 PM, Christian Borntraeger wrote:
>
>
> On
On 11/21/2017 09:14 PM, Jens Axboe wrote:
> On 11/21/2017 01:12 PM, Christian Borntraeger wrote:
>>
>>
>> On 11/21/2017 08:30 PM, Jens Axboe wrote:
>>> On 11/21/2017 12:15 PM, Christian Borntraeger wrote:
On 11/21/2017 07:39 PM, Jens Axboe wrote:
> On 11/21/2017 11:27 AM, Jens
On 11/21/2017 01:12 PM, Christian Borntraeger wrote:
>
>
> On 11/21/2017 08:30 PM, Jens Axboe wrote:
>> On 11/21/2017 12:15 PM, Christian Borntraeger wrote:
>>>
>>>
>>> On 11/21/2017 07:39 PM, Jens Axboe wrote:
On 11/21/2017 11:27 AM, Jens Axboe wrote:
> On 11/21/2017 11:12 AM,
On 11/21/2017 08:30 PM, Jens Axboe wrote:
> On 11/21/2017 12:15 PM, Christian Borntraeger wrote:
>>
>>
>> On 11/21/2017 07:39 PM, Jens Axboe wrote:
>>> On 11/21/2017 11:27 AM, Jens Axboe wrote:
On 11/21/2017 11:12 AM, Christian Borntraeger wrote:
>
>
> On 11/21/2017 07:09 PM,
On Tue, Nov 21 2017 at 7:43am -0500,
Mike Snitzer wrote:
> Decided it a better use of my time to review and then hopefully use the
> block-core's bio splitting infrastructure in DM. Been meaning to do
> that for quite a while anyway. This mail from you just made it all
On Tue, Nov 21 2017, Mike Snitzer wrote:
> On Mon, Nov 20 2017 at 8:35pm -0500,
> Mike Snitzer wrote:
>
>> On Mon, Nov 20 2017 at 7:34pm -0500,
>> NeilBrown wrote:
>>
>> > On Mon, Nov 20 2017, Mike Snitzer wrote:
>> >
>> > >
>> > > But I've now queued
On 11/21/2017 12:15 PM, Christian Borntraeger wrote:
>
>
> On 11/21/2017 07:39 PM, Jens Axboe wrote:
>> On 11/21/2017 11:27 AM, Jens Axboe wrote:
>>> On 11/21/2017 11:12 AM, Christian Borntraeger wrote:
On 11/21/2017 07:09 PM, Jens Axboe wrote:
> On 11/21/2017 10:27 AM, Jens
On 11/21/2017 07:39 PM, Jens Axboe wrote:
> On 11/21/2017 11:27 AM, Jens Axboe wrote:
>> On 11/21/2017 11:12 AM, Christian Borntraeger wrote:
>>>
>>>
>>> On 11/21/2017 07:09 PM, Jens Axboe wrote:
On 11/21/2017 10:27 AM, Jens Axboe wrote:
> On 11/21/2017 03:14 AM, Christian Borntraeger
On 11/21/2017 11:27 AM, Jens Axboe wrote:
> On 11/21/2017 11:12 AM, Christian Borntraeger wrote:
>>
>>
>> On 11/21/2017 07:09 PM, Jens Axboe wrote:
>>> On 11/21/2017 10:27 AM, Jens Axboe wrote:
On 11/21/2017 03:14 AM, Christian Borntraeger wrote:
> Bisect points to
>
>
On 11/21/2017 11:12 AM, Christian Borntraeger wrote:
>
>
> On 11/21/2017 07:09 PM, Jens Axboe wrote:
>> On 11/21/2017 10:27 AM, Jens Axboe wrote:
>>> On 11/21/2017 03:14 AM, Christian Borntraeger wrote:
Bisect points to
1b5a7455d345b223d3a4658a9e5fce985b7998c1 is the first bad
On 11/21/2017 07:09 PM, Jens Axboe wrote:
> On 11/21/2017 10:27 AM, Jens Axboe wrote:
>> On 11/21/2017 03:14 AM, Christian Borntraeger wrote:
>>> Bisect points to
>>>
>>> 1b5a7455d345b223d3a4658a9e5fce985b7998c1 is the first bad commit
>>> commit 1b5a7455d345b223d3a4658a9e5fce985b7998c1
>>>
On 11/21/2017 10:27 AM, Jens Axboe wrote:
> On 11/21/2017 03:14 AM, Christian Borntraeger wrote:
>> Bisect points to
>>
>> 1b5a7455d345b223d3a4658a9e5fce985b7998c1 is the first bad commit
>> commit 1b5a7455d345b223d3a4658a9e5fce985b7998c1
>> Author: Christoph Hellwig
>> Date: Mon
On 21 November 2017 at 14:42, Adrian Hunter wrote:
> card_busy_detect() has a 10 minute timeout. However the correct timeout is
> the data timeout. Change card_busy_detect() to use the data timeout.
Unfortunately I don't think there is a "correct" timeout for this case.
The
On 21/11/2017 6:57 PM, Kent Overstreet wrote:
> On Tue, Nov 21, 2017 at 06:50:32PM +0800, tang.jun...@zte.com.cn wrote:
>> From: Tang Junhui
>>
>> Currently in pick_data_bucket(), though we keep multiple buckets open
>> for writes, and try to segregate different write
On Mon, Nov 20 2017 at 11:54am -0500,
Mike Snitzer wrote:
> DM appears to be the only block driver that doesn't lean on the block
> core's bio splitting. My hope is to fix that but in the meantime it
> doesn't make sense for a device that doesn't need blk_queue_split() to
>
The read request might hit an error when searching the btree, but the error was
not handled in cache_lookup(), and this kind of metadata failure will not go
into cached_dev_read_error(); finally, the upper layer will receive bi_status=0.
In this patch, we detect the metadata error from the return value
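Roughly, the shape of the check is something like the sketch below (untested;
the call and the fields are assumptions based on cache_lookup(), not the final
patch):

	/* Sketch only: a failed btree walk in cache_lookup() is treated as a
	 * metadata error, so the request takes the read-error path and can be
	 * retried from the backing device. */
	ret = bch_btree_map_keys(&s->op, s->iop.c,
				 &KEY(s->iop.inode, bio->bi_iter.bi_sector, 0),
				 cache_lookup_fn, MAP_END_KEY);
	if (ret < 0)
		s->iop.status = BLK_STS_IOERR;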
blk_get_request() can fail, so always check the return value.
Fixes: 0493f6fe5bde ("mmc: block: Move boot partition locking into a driver op")
Fixes: 3ecd8cf23f88 ("mmc: block: move multi-ioctl() to use block layer")
Fixes: 614f0388f580 ("mmc: block: move single ioctl() commands to block requests")
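The pattern being asked for is roughly the sketch below (the queue, opcode and
gfp flags are illustrative, not the exact patch):

	struct request *req;

	req = blk_get_request(mq->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
	if (IS_ERR(req))
		return PTR_ERR(req);	/* propagate the allocation failure */

	/* ... set up and issue the request ... */

	blk_put_request(req);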
The block driver must be resumed if the mmc bus fails to suspend the card.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/bus.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index a4b49e25fe96..7586ff2ad1f1
mmc_cleanup_queue() is not used by any other module. Do not export it.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/queue.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 4f33d277b125..26f8da30ebe5
The card is not necessarily being removed, but the debugfs files must be
removed when the driver is removed, otherwise they will continue to exist
after unbinding the card from the driver. e.g.
# echo "mmc1:0001" > /sys/bus/mmc/drivers/mmcblk/unbind
# cat
The card is required to return to the transfer state. Since that is the state
required to start another transfer, check for that state instead of the
programming state.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 17 +
1 file changed, 13
card_busy_detect() has a 10 minute timeout. However the correct timeout is
the data timeout. Change card_busy_detect() to use the data timeout.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 48
1 file
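The shape of the change is roughly the following sketch (untested; how the
data timeout is plumbed through to timeout_ms is an assumption):

static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms)
{
	unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms);
	u32 status;
	int err;

	while (1) {
		err = __mmc_send_status(card, &status, 5);
		if (err)
			return err;
		/* Done once the card is back in the TRAN state. */
		if (R1_CURRENT_STATE(status) == R1_STATE_TRAN)
			return 0;
		/* Time out against the data timeout, not a fixed 10 minutes. */
		if (time_after(jiffies, timeout))
			return -ETIMEDOUT;
	}
}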
Make card_busy_detect() accumulate all response error bits. Later patches
will make use of this.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 30 ++
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git
Make mmc_pre_req() and mmc_post_req() available to the card drivers. Later
patches will make use of this.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/core.c | 31 ---
drivers/mmc/core/core.h | 31 +++
2 files
Add error-handling comments to explain what would also be done for blk-mq
if it used the legacy error-handling.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 36 +++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff
From: Venkat Gopalakrishnan
This patch adds CMDQ support for command-queue compatible
hosts.
Command queue is added in eMMC-5.1 specification. This
enables the controller to process upto 32 requests at
a time.
Adrian Hunter contributed renaming to cqhci, recovery,
Until mmc has blk-mq support fully implemented and tested, add a parameter
use_blk_mq, set to true if config option MMC_MQ_DEFAULT is selected, which
it is by default.
Signed-off-by: Adrian Hunter
---
drivers/mmc/Kconfig | 10 ++
drivers/mmc/core/core.c |
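A minimal sketch of what such a parameter could look like (names assumed):

	/* Default follows the Kconfig option, overridable at module load time. */
	bool mmc_use_blk_mq = IS_ENABLED(CONFIG_MMC_MQ_DEFAULT);
	module_param_named(use_blk_mq, mmc_use_blk_mq, bool, 0444);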
Define and use a blk-mq queue. Discards and flushes are processed
synchronously, but reads and writes asynchronously. In order to support
slow DMA unmapping, DMA unmapping is not done until after the next request
is started. That means the request is not completed until then. If there is
no next
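As a rough sketch of the blk-mq plumbing this implies (the handler names are
assumptions for illustration):

static const struct blk_mq_ops mmc_mq_ops = {
	.queue_rq	= mmc_mq_queue_rq,	/* reads/writes issued asynchronously */
	.init_request	= mmc_mq_init_request,
	.exit_request	= mmc_mq_exit_request,
	.complete	= mmc_blk_mq_complete,	/* completion deferred until DMA unmap */
	.timeout	= mmc_mq_timed_out,	/* block-layer request timeouts */
};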
Check error bits and save the exception bit when polling card busy.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 38 ++
1 file changed, 30 insertions(+), 8 deletions(-)
diff --git a/drivers/mmc/core/block.c
The block driver's blk-mq paths do not use mmc_start_areq(). In order to
remove mmc_start_areq() entirely, start by removing it from mmc_test.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/mmc_test.c | 122
1 file
There are only a few things the recovery needs to do. Primarily, it just
needs to:
Determine the number of bytes transferred
Get the card back to transfer state
Determine whether to retry
There are also a couple of additional features:
Reset the card before the
Add CQHCI initialization and implement CQHCI operations for Intel GLK.
Signed-off-by: Adrian Hunter
---
drivers/mmc/host/Kconfig | 1 +
drivers/mmc/host/sdhci-pci-core.c | 155 +-
2 files changed, 155 insertions(+), 1
Recovery is simpler to understand if it is only used for errors. Create a
separate function for card polling.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 27 ++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git
For blk-mq, add support for completing requests directly in the ->done
callback. That means that error handling and urgent background operations
must be handled by recovery_work in that case.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 102
Remove config option MMC_MQ_DEFAULT and parameter mmc_use_blk_mq, so that
blk-mq is always used.
Signed-off-by: Adrian Hunter
---
drivers/mmc/Kconfig | 10 --
drivers/mmc/core/core.c | 7 ---
drivers/mmc/core/core.h | 2 --
drivers/mmc/core/host.c
Remove code no longer needed after the switch to blk-mq.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/bus.c | 2 -
drivers/mmc/core/core.c | 185 +--
drivers/mmc/core/core.h | 8 --
include/linux/mmc/host.h | 3
Remove code no longer needed after the switch to blk-mq.
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 706 +--
drivers/mmc/core/block.h | 2 -
drivers/mmc/core/queue.c | 240 +---
Add CQE support to the block driver, including:
- optionally using DCMD for flush requests
- "manually" issuing discard requests
- issuing read / write requests to the CQE
- supporting block-layer timeouts
- handling recovery
- supporting re-tuning
CQE offers 25% - 50%
Ensure blk_get_request() is paired with blk_put_request().
Fixes: 0493f6fe5bde ("mmc: block: Move boot partition locking into a driver op")
Fixes: 627c3ccfb46a ("mmc: debugfs: Move block debugfs into block module")
Signed-off-by: Adrian Hunter
---
Use blk_cleanup_queue() to shut down the queue when the driver is removed,
and instead get an extra reference to the queue to prevent it from being
freed before the final mmc_blk_put().
Signed-off-by: Adrian Hunter
---
drivers/mmc/core/block.c | 17 -
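The intended lifetime handling, as a sketch (the helper names here are
hypothetical; only the blk_* calls are real):

/* On probe: take an extra reference so the queue outlives blk_cleanup_queue(). */
static int mmc_blk_hold_queue(struct mmc_queue *mq)
{
	return blk_get_queue(mq->queue) ? 0 : -ENODEV;
}

/* On driver removal: stop and drain the queue now... */
static void mmc_blk_shutdown_queue(struct mmc_queue *mq)
{
	blk_cleanup_queue(mq->queue);
}

/* ...but drop the last reference only from the final mmc_blk_put(). */
static void mmc_blk_release_queue(struct mmc_queue *mq)
{
	blk_put_queue(mq->queue);
}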
Hi
Here is V14 of the hardware command queue patches without the software
command queue patches, now using blk-mq and now with blk-mq support for
non-CQE I/O.
V14 includes a number of fixes to existing code, changes to default to
blk-mq, and adds patches to remove legacy code.
HW CMDQ offers
On Tue, Nov 21 2017 at 7:10am -0500,
Mike Snitzer wrote:
> On Mon, Nov 20 2017 at 8:35pm -0500,
> Mike Snitzer wrote:
>
> > On Mon, Nov 20 2017 at 7:34pm -0500,
> > NeilBrown wrote:
> >
> > > Please see
> > >
On Mon, Nov 20 2017 at 8:35pm -0500,
Mike Snitzer wrote:
> On Mon, Nov 20 2017 at 7:34pm -0500,
> NeilBrown wrote:
>
> > On Mon, Nov 20 2017, Mike Snitzer wrote:
> >
> > >
> > > But I've now queued this patch for once Linus gets back (reverts DM
> > >
From: Tang Junhui
On Tue, Nov 21, 2017 at 06:50:32PM +0800, tang.jun...@zte.com.cn wrote:
> > From: Tang Junhui
> >
> > Currently in pick_data_bucket(), though we keep multiple buckets open
> > for writes, and try to segregate different write
On Tue, Nov 21, 2017 at 06:50:32PM +0800, tang.jun...@zte.com.cn wrote:
> From: Tang Junhui
>
> Currently in pick_data_bucket(), though we keep multiple buckets open
> for writes, and try to segregate different write streams for better
> cache utilization: first we look
From: Tang Junhui
Currently in pick_data_bucket(), though we keep multiple buckets open
for writes and try to segregate different write streams for better
cache utilization: first we look for a bucket where the last write to
it was sequential with the current write, and
On 11/21/2017 10:50 AM, Christian Borntraeger wrote:
>
>
> On 11/21/2017 09:35 AM, Christian Borntraeger wrote:
>>
>>
>> On 11/20/2017 09:52 PM, Jens Axboe wrote:
>>> On 11/20/2017 01:49 PM, Christian Borntraeger wrote:
On 11/20/2017 08:42 PM, Jens Axboe wrote:
> On
On 11/21/2017 09:35 AM, Christian Borntraeger wrote:
>
>
> On 11/20/2017 09:52 PM, Jens Axboe wrote:
>> On 11/20/2017 01:49 PM, Christian Borntraeger wrote:
>>>
>>>
>>> On 11/20/2017 08:42 PM, Jens Axboe wrote:
On 11/20/2017 12:29 PM, Christian Borntraeger wrote:
>
>
> On
On 11/20/2017 09:52 PM, Jens Axboe wrote:
> On 11/20/2017 01:49 PM, Christian Borntraeger wrote:
>>
>>
>> On 11/20/2017 08:42 PM, Jens Axboe wrote:
>>> On 11/20/2017 12:29 PM, Christian Borntraeger wrote:
On 11/20/2017 08:20 PM, Bart Van Assche wrote:
> On Fri, 2017-11-17 at