On 07/26/2018 03:52 PM, jianchao.wang wrote:
> In addition, .runtime_suspend is invoked under spinlock and irq-disabled.
> So sleep is forbidden here.
> Please refer to rpm_suspend
>
> * This function must be called under dev->power.lock with interrupts disabled
>
__rpm
On 07/26/2018 10:45 AM, jianchao.wang wrote:
> Hi Bart
>
> On 07/26/2018 06:26 AM, Bart Van Assche wrote:
>> @@ -102,9 +109,11 @@ int blk_pre_runtime_suspend(struct request_queue *q)
>> return ret;
>>
>> blk_pm_runtime_lock(q);
>> +
Hi Bart
On 07/26/2018 06:26 AM, Bart Van Assche wrote:
> @@ -102,9 +109,11 @@ int blk_pre_runtime_suspend(struct request_queue *q)
> return ret;
>
> blk_pm_runtime_lock(q);
> + blk_set_preempt_only(q);
We only stop non-RQF_PM requests from entering when RPM_SUSPENDING and RPM_
Hi Bart
On 07/26/2018 06:26 AM, Bart Van Assche wrote:
> +
> +void blk_pm_runtime_lock(struct request_queue *q)
> +{
> + spin_lock(&q->rpm_lock);
> + wait_event_interruptible_locked(q->rpm_wq,
> + q->rpm_owner == NULL || q->rpm_owner == current);
> + if (q->rpm_ow
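The pattern in the quoted blk_pm_runtime_lock() — block until the queue has no owner, or the owner is already the current task — can be sketched in userspace with pthreads. This is only an illustrative analog (pm_lock_acquire/pm_lock_release and the struct are made-up names, not the kernel API):

```c
#include <pthread.h>

/* Illustrative userspace analog of blk_pm_runtime_lock():
 * sleep under a lock until nobody owns the resource, or the
 * current thread already owns it (re-entrant acquisition). */
struct pm_lock {
    pthread_mutex_t mu;
    pthread_cond_t  wq;
    pthread_t       owner;
    int             owned;  /* 0: free; otherwise 'owner' is valid */
    int             depth;  /* recursion depth for the owner */
};

void pm_lock_acquire(struct pm_lock *l)
{
    pthread_mutex_lock(&l->mu);
    /* Mirrors wait_event_interruptible_locked(rpm_wq,
     * rpm_owner == NULL || rpm_owner == current). */
    while (l->owned && !pthread_equal(l->owner, pthread_self()))
        pthread_cond_wait(&l->wq, &l->mu);
    l->owner = pthread_self();
    l->owned = 1;
    l->depth++;
    pthread_mutex_unlock(&l->mu);
}

void pm_lock_release(struct pm_lock *l)
{
    pthread_mutex_lock(&l->mu);
    if (--l->depth == 0) {
        l->owned = 0;
        pthread_cond_broadcast(&l->wq);  /* wake other waiters */
    }
    pthread_mutex_unlock(&l->mu);
}
```

Note the re-entrancy: because the wait condition also admits the current owner, the same task may nest acquisitions, which is what `q->rpm_owner == current` buys in the quoted patch.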
On 07/23/2018 11:50 PM, Bart Van Assche wrote:
> The patch below fixes queue stalling when shared hctx marked for restart
> (BLK_MQ_S_SCHED_RESTART bit) but q->shared_hctx_restart stays zero. The
> root cause is that hctxs are shared between queues, but 'shared_hctx_restart'
The blk_mq_hw_ctx
Hi Keith
On 07/19/2018 01:45 AM, Keith Busch wrote:
>>> + list_for_each_entry(q, &set->tag_list, tag_set_list) {
>>> /*
>>> * Request timeouts are handled as a forward rolling timer. If
>>> * we end up here it means that no requests are pending and
>>> @@ -8
On 07/13/2018 06:24 AM, Bart Van Assche wrote:
> Hello Keith,
>
> Before commit 12f5b9314545 ("blk-mq: Remove generation seqeunce"), if a
> request completion was reported after request timeout processing had
> started, completion handling was skipped. The following code in
> blk_mq_complete_req
On 07/13/2018 06:24 AM, Bart Van Assche wrote:
> On Thu, 2018-07-12 at 13:24 -0600, Keith Busch wrote:
>> On Thu, Jul 12, 2018 at 06:16:12PM +, Bart Van Assche wrote:
>>> What prevents that a request finishes and gets reused after the
>>> blk_mq_req_expired() call has finished and before kre
On 06/28/2018 01:42 PM, Kashyap Desai wrote:
> Ming -
>
> Performance drop is resolved on my setup, but may be some stability of the
> kernel is caused due to this patch set. I have not tried without patch
> set, but in case you can figure out if below crash is due to this patch
> set, I can tr
On 06/12/2018 09:01 PM, jianchao.wang wrote:
> Hi ming
>
> Thanks for your kindly response.
>
> On 06/12/2018 06:17 PM, Ming Lei wrote:
>> On Tue, Jun 12, 2018 at 6:04 PM, jianchao.wang
>> wrote:
>>> Hi Jens and Christoph
>>>
>>> In the
Hi ming
Thanks for your kind response.
On 06/12/2018 06:17 PM, Ming Lei wrote:
> On Tue, Jun 12, 2018 at 6:04 PM, jianchao.wang
> wrote:
>> Hi Jens and Christoph
>>
>> In the recent commit of new blk-mq timeout handling, we don't have any
>> protection
Hi Jens and Christoph
In the recent commits for the new blk-mq timeout handling, we don't have any
protection for a timed-out request against the completion path. We just hold a
request->ref count; that only prevents the request tag from being released and
recycled, but not completion.
For the scsi mid
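The distinction being drawn here — a held reference pins the request's tag/memory, but does not by itself stop the completion path — can be sketched with C11 atomics. This is a hypothetical userspace model (req_alloc/req_get/req_put are illustrative names), not the blk-mq implementation:

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative request: the refcount keeps the object (the "tag")
 * from being freed and recycled while some path still looks at it. */
struct req {
    atomic_int ref;
};

struct req *req_alloc(void)
{
    struct req *rq = malloc(sizeof(*rq));
    atomic_init(&rq->ref, 1);   /* issue path holds the initial ref */
    return rq;
}

void req_get(struct req *rq)
{
    atomic_fetch_add(&rq->ref, 1);
}

/* Returns 1 when this put dropped the last reference and freed it. */
int req_put(struct req *rq)
{
    if (atomic_fetch_sub(&rq->ref, 1) == 1) {
        free(rq);
        return 1;
    }
    return 0;
}
```

The point of the paragraph above is visible in this model: while the timeout path holds an extra reference, a completion elsewhere cannot recycle the request, yet nothing in the refcount alone stops that completion from running.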
Hi Jens and Holger
Thanks for your kind response.
That's really appreciated.
I will post next version based on Jens' patch.
Thanks
Jianchao
On 05/23/2018 02:32 AM, Holger Hoffstätte wrote:
This looks great but prevents kyber from being built as module,
which is AFAIK supposed to work
Hi Omar
Thanks for your kind response.
On 05/23/2018 04:02 AM, Omar Sandoval wrote:
> On Tue, May 22, 2018 at 10:48:29PM +0800, Jianchao Wang wrote:
>> Currently, kyber is very unfriendly with merging. kyber depends
>> on ctx rq_list to do merging, however, most of time, it will not
>> leave an
Hi Ming
On 05/15/2018 08:56 PM, Ming Lei wrote:
> Looks a nice fix on nvme_create_queue(), but seems the change on
> adapter_alloc_cq() is missed in above patch.
>
> Could you prepare a formal one so that I may integrate it to V6?
Please refer to
Thanks
Jianchao
From 9bb6db79901ef303cd40c4c91
Hi ming
On 05/16/2018 10:09 AM, Ming Lei wrote:
> So could you check if only the patch("unquiesce admin queue after shutdown
> controller") can fix your IO hang issue?
I did indeed test this before fixing the warning.
It does fix the IO hang issue. :)
Thanks
Jianchao
Hi ming
On 05/11/2018 08:29 PM, Ming Lei wrote:
> +static void nvme_eh_done(struct nvme_eh_work *eh_work, int result)
> +{
> + struct nvme_dev *dev = eh_work->dev;
> + bool top_eh;
> +
> + spin_lock(&dev->eh_lock);
> + top_eh = list_is_last(&eh_work->list, &dev->eh_head);
> + d
Hi ming
On 05/15/2018 08:33 AM, Ming Lei wrote:
> We still have to quiesce admin queue before canceling request, so looks
> the following patch is better, so please ignore the above patch and try
> the following one and see if your hang can be addressed:
>
> diff --git a/drivers/nvme/host/pci.c b
Hi ming
On 05/14/2018 05:38 PM, Ming Lei wrote:
>> Here is the deadlock scenario.
>>
>> nvme_eh_work // EH0
>> -> nvme_reset_dev //hold reset_lock
>> -> nvme_setup_io_queues
>> -> nvme_create_io_queues
>> -> nvme_create_queue
>> -> set nvmeq->cq_vector
>> ->
Hi ming
Please refer to my test log and analysis.
[ 229.872622] nvme nvme0: I/O 164 QID 1 timeout, reset controller
[ 229.872649] nvme nvme0: EH 0: before shutdown
[ 229.872683] nvme nvme0: I/O 165 QID 1 timeout, reset controller
[ 229.872700] nvme nvme0: I/O 166 QID 1 timeout, reset controll
Hi Bart
On 05/14/2018 12:03 PM, Bart Van Assche wrote:
> On Mon, 2018-05-14 at 09:37 +0800, jianchao.wang wrote:
>> In addition, on a 64bit system, how do you set up the timer with a 32bit
>> deadline ?
>
> If timeout handling occurs less than (1 << 31) / HZ seconds
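Bart's point about "(1 << 31) / HZ seconds" is the standard jiffies wraparound window: a 32-bit deadline works as long as the two timestamps being compared are less than 2^31 ticks apart. A minimal sketch of the kernel's time_after()-style comparison (tick_after is an illustrative name):

```c
#include <stdint.h>
#include <stdbool.h>

/* Wraparound-safe "is a after b" on 32-bit tick counters, in the
 * style of the kernel's time_after(): correct as long as a and b
 * are less than 2^31 ticks — (1 << 31) / HZ seconds — apart. */
static bool tick_after(uint32_t a, uint32_t b)
{
    return (int32_t)(b - a) < 0;
}
```

The signed reinterpretation of the unsigned difference is what makes the comparison survive the counter wrapping past 0xFFFFFFFF.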
Hi Bart
On 05/11/2018 11:26 PM, Bart Van Assche wrote:
> The bug is in the above trace_printk() call: blk_rq_deadline() must only be
> used
> for the legacy block layer and not for blk-mq code. If you have a look at the
> value
> of the das.deadline field then one can see that the value of that
Hi bart
On 05/11/2018 11:29 PM, Bart Van Assche wrote:
> On Fri, 2018-05-11 at 14:35 +0200, Christoph Hellwig wrote:
>>> It should be due to union blk_deadline_and_state.
>>> +union blk_deadline_and_state {
>>> + struct {
>>> + uint32_t generation:30;
>>> + uint32_t state:2
Hi bart
I add debug log in blk_mq_add_timer as following
void blk_mq_add_timer(struct request *req, enum mq_rq_state old,
enum mq_rq_state new)
{
struct request_queue *q = req->q;
if (!req->timeout)
req->timeout = q->rq_timeout;
if (!blk_m
Hi ming
I did some tests on my local setup.
[ 598.828578] nvme nvme0: I/O 51 QID 4 timeout, disable controller
This should be a timeout on nvme_reset_dev->nvme_wait_freeze.
[ 598.828743] nvme nvme0: EH 1: before shutdown
[ 599.013586] nvme nvme0: EH 1: after shutdown
[ 599.137197] nvme nvme0: EH
Hi ming
On 05/04/2018 04:02 PM, Ming Lei wrote:
>> nvme_error_handler should invoke nvme_reset_ctrl instead of introducing
>> another interface.
>> Then it is more convenient to ensure that there will be only one resetting
>> instance running.
>>
> But as you mentioned above, reset_work has to b
Oh sorry.
On 05/04/2018 02:10 PM, jianchao.wang wrote:
> nvme_error_handler should invoke nvme_reset_ctrl instead of introducing
> another interface.
> Then it is more convenient to ensure that there will be only one resetting
> instance running.
ctrl state is still in RES
Hi ming
On 05/04/2018 12:24 PM, Ming Lei wrote:
>> Just invoke nvme_dev_disable in nvme_error_handler context and hand over the
>> other things
>> to nvme_reset_work as the v2 patch series seems clearer.
> That way may not fix the current race: nvme_dev_disable() will
> quiesce/freeze queue again
Hi Ming
Thanks for your kind response.
On 05/03/2018 06:08 PM, Ming Lei wrote:
> nvme_eh_reset() can move on, if controller state is either CONNECTING or
> RESETTING, nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING) won't
> be called in nvme_eh_reset(), and nvme_pre_reset_dev() will be c
Hi ming
On 05/03/2018 11:17 AM, Ming Lei wrote:
> static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
> @@ -1199,7 +1204,7 @@ static enum blk_eh_timer_return nvme_timeout(struct
> request *req, bool reserved)
> if (nvme_should_reset(dev, csts)) {
> n
Hi Ming
On 05/02/2018 11:33 AM, Ming Lei wrote:
> No, there isn't such race, the 'mod_timer' doesn't make a difference
> because 'q->timeout_off' will be visible in new work func after
> cancel_work_sync() returns. So even the timer is expired, work func
> still returns immediately.
Yes, you are
Hi Ming
On 05/02/2018 12:54 PM, Ming Lei wrote:
>> We need to return BLK_EH_RESET_TIMER in nvme_timeout then:
>> 1. defer the completion. we can't unmap the io request before close the
>> controller totally, so not BLK_EH_HANDLED.
>> 2. nvme_cancel_request could complete it. blk_mq_complete_reque
Hi Ming
On 04/29/2018 11:41 PM, Ming Lei wrote:
> +
> static enum blk_eh_timer_return nvme_timeout(struct request *req, bool
> reserved)
> {
> struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> @@ -1197,8 +1297,7 @@ static enum blk_eh_timer_return nvme_timeout(struct
> request *req, bool re
Hi ming
On 04/29/2018 11:41 PM, Ming Lei wrote:
>
> +static void __blk_unquiesce_timeout(struct request_queue *q)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(q->queue_lock, flags);
> + q->timeout_off = false;
> + spin_unlock_irqrestore(q->queue_lock, flags);
> +}
> +
Hi ming
On 04/29/2018 09:36 AM, Ming Lei wrote:
> On Sun, Apr 29, 2018 at 6:27 AM, Ming Lei wrote:
>> On Sun, Apr 29, 2018 at 5:57 AM, Ming Lei wrote:
>>> On Sat, Apr 28, 2018 at 10:00 PM, jianchao.wang
>>> wrote:
>>>> Hi ming
>>>>
>>&
Hi Ming and Keith
Let me extend the details a bit more here. :)
On 04/28/2018 09:35 PM, Keith Busch wrote:
>> Actually there isn't the case before, even for legacy path, one .timeout()
>> handles one request only.
Yes, .timeout should be invoked for every timed-out request, and .timeout should
also handle th
Hi ming
On 04/27/2018 10:57 PM, Ming Lei wrote:
> I may not understand your point, once blk_sync_queue() returns, the
> timer itself is deactivated, meantime the synced .nvme_timeout() only
> returns EH_NOT_HANDLED before the deactivation.
>
> That means this timer won't be expired any more, so c
Hi Tejun and Joseph
On 04/27/2018 02:32 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote:
>> +Tejun (I guess he might be interested in the results below)
>
> Our experiments didn't work out too well either. At this point, it
> isn't clear whether i
On 04/26/2018 11:57 PM, Ming Lei wrote:
> Hi Jianchao,
>
> On Thu, Apr 26, 2018 at 11:07:56PM +0800, jianchao.wang wrote:
>> Hi Ming
>>
>> Thanks for your wonderful solution. :)
>>
>> On 04/26/2018 08:39 PM, Ming Lei wrote:
>>> +/*
>>> +
Hi Ming
Thanks for your wonderful solution. :)
On 04/26/2018 08:39 PM, Ming Lei wrote:
> +/*
> + * This one is called after queues are quiesced, and no in-fligh timeout
> + * and nvme interrupt handling.
> + */
> +static void nvme_pci_cancel_request(struct request *req, void *data,
> +
Hi Paolo
When I executed the script, I got this:
8:0 rbps=1000 wbps=0 riops=0 wiops=0 idle=0 latency=max
The idle setting is 0.
I'm afraid io.low will not work.
Please refer to the following code in tg_set_limit
/* force user to configure all settings for low limit */
if (!(
Hi Paolo
On 04/23/2018 01:32 PM, Paolo Valente wrote:
> Thanks for sharing this fix. I tried it too, but nothing changes in
> my test :(
That's really sad.
> At this point, my doubt is still: am I getting io.low limit right? I
> understand that an I/O-bound group should be guaranteed a rbps
rade_time + td->throtl_slice) &&
time_after_eq(now, tg_last_low_overflow_time(tg) +
td->throtl_slice) &&
- (!throtl_tg_is_idle(tg) ||
+ (!throtl_tg_is_idle(tg, false) ||
!list_empty(&tg_t
Hi Paolo
I used to meet similar issue on io.low.
Can you try the following patches to see whether the issue is fixed?
https://marc.info/?l=linux-block&m=152325456307423&w=2
https://marc.info/?l=linux-block&m=152325457607425&w=2
Thanks
Jianchao
On 04/22/2018 05:23 PM, Paolo Valente wrote:
> H
On 04/21/2018 10:10 PM, Jens Axboe wrote:
> On 4/21/18 7:34 AM, jianchao.wang wrote:
>> Hi Bart
>>
>> Thanks for your kindly response.
>>
>> On 04/20/2018 10:11 PM, Bart Van Assche wrote:
>>> On Fri, 2018-04-20 at 14:55 +0800, jianchao.wang wrote:
>
Hi Bart
Thanks for your kind response.
On 04/20/2018 10:11 PM, Bart Van Assche wrote:
> On Fri, 2018-04-20 at 14:55 +0800, jianchao.wang wrote:
>> Hi Bart
>>
>> On 04/20/2018 12:43 AM, Bart Van Assche wrote:
>>> Use the deadline instead of the request generation
Hi Bart
On 04/20/2018 12:43 AM, Bart Van Assche wrote:
> Use the deadline instead of the request generation to detect whether
> or not a request timer fired after reinitialization of a request
Maybe using the deadline to do this is not suitable.
Let's think of the following scenario.
T1/T2 times in
Hi Martin
On 04/17/2018 08:10 PM, Martin Steigerwald wrote:
> For testing it I add it to 4.16.2 with the patches I have already?
You could try applying only this patch as a test. :)
>
> - '[PATCH] blk-mq_Directly schedule q->timeout_work when aborting a
> request.mbox'
>
> - '[PATCH v2]
Hi bart
Thanks for your kind response.
I have sent out the patch. Please refer to
https://marc.info/?l=linux-block&m=152393666517449&w=2
Thanks
Jianchao
On 04/17/2018 08:15 AM, Bart Van Assche wrote:
> On Tue, 2018-04-17 at 00:04 +0800, jianchao.wang wrote:
>> diff --git a
Hi Martin and Ming
Regarding the issue "RIP: scsi_times_out+0x17":
rq->gstate and rq->aborted_gstate are both zero before the requests are allocated.
It looks like the SCSI timeout value on Martin's system is small.
When the request_queue timer fires, if there is a request which is alloca
Hi Ming
Thanks for your kind response.
On 04/16/2018 04:15 PM, Ming Lei wrote:
>> -if (!blk_mq_get_dispatch_budget(hctx))
>> +if (!blk_mq_get_dispatch_budget(hctx)) {
>> +blk_mq_sched_mark_restart_hctx(hctx);
> The RESTART flag still may not take into
Hi Ming
On 04/12/2018 07:38 AM, Ming Lei wrote:
> + *
> + * Cover complete vs BLK_EH_RESET_TIMER race in slow path with
> + * helding queue lock.
>*/
> hctx_lock(hctx, &srcu_idx);
> if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
> __blk_mq_complete
Hi Bart
Thanks for your kind response.
On 04/10/2018 09:01 PM, Bart Van Assche wrote:
> On Tue, 2018-04-10 at 15:59 +0800, jianchao.wang wrote:
>> If yes, how does the timeout handler get the freed request when the tag has
>> been freed ?
>
> Hello Jianchao,
>
>
Hi Bart
On 04/10/2018 09:34 AM, Bart Van Assche wrote:
> If a completion occurs after blk_mq_rq_timed_out() has reset
> rq->aborted_gstate and the request is again in flight when the timeout
> expires then a request will be completed twice: a first time by the
> timeout handler and a second time w
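The double-completion race Bart describes — the timeout handler completing a request once, and the regular completion path completing it a second time — is exactly what a single atomic state transition prevents. A minimal sketch with a compare-and-swap guard (rq_try_complete and the state names are illustrative, not the blk-mq API):

```c
#include <stdatomic.h>
#include <stdbool.h>

enum rq_state { RQ_IDLE, RQ_IN_FLIGHT, RQ_COMPLETE };

/* Only the path that wins the IN_FLIGHT -> COMPLETE transition may
 * finish the request; the loser (timeout handler vs. normal
 * completion) sees the CAS fail and must back off. */
static bool rq_try_complete(atomic_int *state)
{
    int expected = RQ_IN_FLIGHT;
    return atomic_compare_exchange_strong(state, &expected, RQ_COMPLETE);
}
```

With such a guard, a completion arriving after the timeout handler has already claimed the request simply loses the CAS, so the request can never be completed twice.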
Hi Jose
On 03/27/2018 05:25 AM, jos...@linux.vnet.ibm.com wrote:
> Hello everyone!
>
> I'm running Ubuntu 18.04 (4.15.0-12-generic) in KVM/QEMU (powerpc64le).
> Everything looks good until I try to hotplug CPUs in my VM. As soon as I
> do that I get the following error in my VM dmesg:
Please ref
Hi Keith
Thanks for your time and for the patch.
On 03/24/2018 06:19 AM, Keith Busch wrote:
> The PCI interrupt vectors intended to be associated with a queue may
> not start at 0. This patch adds an offset parameter so blk-mq may find
> the intended affinity mask. The default value is 0 so exis
Sorry, Jens, I think I didn't get the point.
Did I miss anything?
On 02/01/2018 11:07 AM, Jens Axboe wrote:
> Yeah I agree, and my last patch missed that we do care about segments for
> discards. Below should be better...
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 8452fc7164cc
Hi Jens
On 01/31/2018 11:29 PM, Jens Axboe wrote:
> How about something like the below?
>
>
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 8452fc7164cc..cee102fb060e 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -574,8 +574,13 @@ static int ll_merge_requests_fn(s
Hi Jens
On 01/30/2018 11:57 PM, Jens Axboe wrote:
> On 1/30/18 8:41 AM, Jens Axboe wrote:
>> Hi,
>>
>> I just hit this on 4.15+ on the laptop, it's running Linus' git
>> as of yesterday, right after the block tree merge:
>>
>> commit 0a4b6e2f80aad46fb55a5cf7b1664c0aef030ee0
>> Merge: 9697e9da8429
Hi ming
Sorry for the delayed report on this.
On 01/17/2018 05:57 PM, Ming Lei wrote:
> 2) hctx->next_cpu can become offline from online before __blk_mq_run_hw_queue
> is run, there isn't warning, but once the IO is submitted to hardware,
> after it is completed, how does the HBA/hw queue notify CPU sin
Hi ming
Thanks for your kind response.
On 01/17/2018 02:22 PM, Ming Lei wrote:
> This warning can't be removed completely, for example, the CPU figured
> in blk_mq_hctx_next_cpu(hctx) can be put on again just after the
> following call returns and before __blk_mq_run_hw_queue() is scheduled
>
Hi ming
Thanks for your kind response.
On 01/17/2018 11:52 AM, Ming Lei wrote:
>> It is here.
>> __blk_mq_run_hw_queue()
>>
>> WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) &&
>> cpu_online(hctx->next_cpu));
> I think this warning is triggered after the CPU o
Hi ming
Thanks for your patch and kind response.
On 01/16/2018 11:32 PM, Ming Lei wrote:
> OK, I got it, and it should have been the only corner case in which
> all CPUs mapped to this hctx become offline, and I believe the following
> patch should address this case, could you give a test?
>
>
Hi minglei
On 01/16/2018 08:10 PM, Ming Lei wrote:
>>> - next_cpu = cpumask_next(hctx->next_cpu, hctx->cpumask);
>>> + next_cpu = cpumask_next_and(hctx->next_cpu, hctx->cpumask,
>>> + cpu_online_mask);
>>> if (next_cpu >= nr_cpu_ids)
>>> -
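The cpumask_next_and() change quoted above picks the next CPU that is both in the hctx's mask and online, wrapping to the start when the search runs past the end. A toy analog on 64-bit masks (next_cpu_and/pick_next_cpu are made-up names; 64 stands in for nr_cpu_ids):

```c
#include <stdint.h>

/* Toy analog of cpumask_next_and(): first set bit strictly after
 * 'prev' in (mask & online); returns 64 ("nr_cpu_ids") when none. */
static int next_cpu_and(int prev, uint64_t mask, uint64_t online)
{
    uint64_t both = mask & online;
    for (int cpu = prev + 1; cpu < 64; cpu++)
        if (both & (1ULL << cpu))
            return cpu;
    return 64;
}

/* Round-robin pick with wraparound, in the spirit of
 * blk_mq_hctx_next_cpu(): retry from the start on overflow. */
static int pick_next_cpu(int prev, uint64_t mask, uint64_t online)
{
    int cpu = next_cpu_and(prev, mask, online);
    if (cpu >= 64)
        cpu = next_cpu_and(-1, mask, online); /* wrap to the start */
    return cpu; /* may still be 64 if no mapped CPU is online */
}
```

Intersecting with the online mask before picking is the heart of the quoted fix: the round-robin walk can then never land on a mapped but offline CPU.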
Hi Ming
On 01/12/2018 10:53 AM, Ming Lei wrote:
> From: Christoph Hellwig
>
> The previous patch assigns interrupt vectors to all possible CPUs, so
> now hctx can be mapped to possible CPUs, this patch applies this fact
> to simplify queue mapping & schedule so that we don't need to handle
> CPU
On 01/13/2018 05:19 AM, Bart Van Assche wrote:
> Sorry but I only retrieved the blk-mq debugfs several minutes after the hang
> started so I'm not sure the state information is relevant. Anyway, I have
> attached
> it to this e-mail. The most remarkable part is the following:
>
> ./9ddf
On 12/15/2017 03:31 PM, Peter Zijlstra wrote:
> On Fri, Dec 15, 2017 at 10:12:50AM +0800, jianchao.wang wrote:
>>> That only makes it a little better:
>>>
>>> Task-A Worker
>>>
>>> write_seqcount_begi
On 12/15/2017 05:54 AM, Peter Zijlstra wrote:
> On Thu, Dec 14, 2017 at 09:42:48PM +, Bart Van Assche wrote:
>> On Thu, 2017-12-14 at 21:20 +0100, Peter Zijlstra wrote:
>>> On Thu, Dec 14, 2017 at 06:51:11PM +, Bart Van Assche wrote:
On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote
On 12/14/2017 12:13 AM, Tejun Heo wrote:
> Hello,
>
> On Wed, Dec 13, 2017 at 11:30:48AM +0800, jianchao.wang wrote:
>>> + } else {
>>> + srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
>>> + if (!blk_mark_rq_complete(rq))
>>
> v2: - Fixed BLK_EH_RESET_TIMER handling as pointed out by Jianchao.
> - s/request->gstate_seqc/request->gstate_seq/ as suggested by Peter.
> - READ_ONCE() added in blk_mq_rq_update_state() as suggested by Peter.
>
> Signed-off-by: Tejun Heo
> Cc: "jianch
Hello tejun
Sorry for missing the V2; same comment again.
On 12/13/2017 03:01 AM, Tejun Heo wrote:
> Currently, blk-mq protects only the issue path with RCU. This patch
> puts the completion path under the same RCU protection. This will be
> used to synchronize issue/completion against timeout
On 09/21/2017 09:29 AM, Christoph Hellwig wrote:
> So the check change here looks good to me.
>
> I don't like like the duplicate code, can you look into sharing
> the new segment checks between the two functions and the existing
> instance in ll_merge_requests_fn by passing say two struct bio *
On 09/19/2017 10:36 PM, Christoph Hellwig wrote:
> On Tue, Sep 19, 2017 at 08:55:59AM +0800, jianchao.wang wrote:
>>> But can you elaborate a little more on how this found and if there
>>> is a way to easily reproduce it, say for a blktests test case?
>>>
>>
On 09/19/2017 07:51 AM, Christoph Hellwig wrote:
> On Sat, Sep 16, 2017 at 07:10:30AM +0800, Jianchao Wang wrote:
>> If the bio_integrity_merge_rq() return false or nr_phys_segments exceeds
>> the max_segments, the merging fails, but the bi_front/back_seg_size may
>> have been modified. To avoid
On 09/13/2017 11:54 AM, Jens Axboe wrote:
> On 09/12/2017 09:39 PM, jianchao.wang wrote:
>>> Exactly, and especially the readability is the key element here. It's
>>> just not worth it to try and be too clever, especially not for
>>> something like this. When
On 09/13/2017 10:45 AM, Jens Axboe wrote:
@@ -1029,14 +1029,20 @@ bool blk_mq_dispatch_rq_list(struct
request_queue *q, struct list_head *list)
if (list_empty(list))
bd.last = true;
else {
On 09/13/2017 10:23 AM, Jens Axboe wrote:
> On 09/12/2017 07:39 PM, jianchao.wang wrote:
>>
>>
>> On 09/13/2017 09:24 AM, Ming Lei wrote:
>>> On Wed, Sep 13, 2017 at 09:01:25AM +0800, jianchao.wang wrote:
>>>> Hi ming
>>>>
>>>> O
On 09/13/2017 09:24 AM, Ming Lei wrote:
> On Wed, Sep 13, 2017 at 09:01:25AM +0800, jianchao.wang wrote:
>> Hi ming
>>
>> On 09/12/2017 06:23 PM, Ming Lei wrote:
>>>> @@ -1029,14 +1029,20 @@ bool blk_mq_dispatch_rq_list(struct request_queue
>>>> *
Hi ming
On 09/12/2017 06:23 PM, Ming Lei wrote:
>> @@ -1029,14 +1029,20 @@ bool blk_mq_dispatch_rq_list(struct request_queue
>> *q, struct list_head *list)
>> if (list_empty(list))
>> bd.last = true;
>> else {
>> -struct request *