type_ptr += type_ptr[3] + 4;    <-- here
}
Then type_ptr went out of bounds of the buffer.
Thanks
Jianchao
On 3/14/19 11:19 AM, jianchao.wang wrote:
> Dear all
>
> When our customer probed the lpfc devices, they encountered odd memory
> corruption issues,
> and we g
On 2/15/19 11:14 AM, Ming Lei wrote:
> On Fri, Feb 15, 2019 at 10:34:39AM +0800, jianchao.wang wrote:
>> Hi Ming
>>
>> Thanks for your kind response.
>>
>> On 2/15/19 10:00 AM, Ming Lei wrote:
>>> On Tue, Feb 12, 2019 at 09:56:25AM +08
Hi Ming
Thanks for your kind response.
On 2/15/19 10:00 AM, Ming Lei wrote:
> On Tue, Feb 12, 2019 at 09:56:25AM +0800, Jianchao Wang wrote:
>> When requeue, if RQF_DONTPREP, rq has contained some driver
>> specific data, so insert it to hctx dispatch list to avoid any
>> merge. Take scsi as
Hi Jens
Thanks for your kind response.
On 2/12/19 7:20 AM, Jens Axboe wrote:
> On 2/11/19 4:15 PM, Jens Axboe wrote:
>> On 2/11/19 8:59 AM, Jens Axboe wrote:
>>> On 2/10/19 10:41 PM, Jianchao Wang wrote:
When requeue, if RQF_DONTPREP, rq has contained some driver
specific data, so
On 12/31/18 12:27 AM, Tariq Toukan wrote:
>
>
> On 1/27/2018 2:41 PM, jianchao.wang wrote:
>> Hi Tariq
>>
>> Thanks for your kind response.
>> That's really appreciated.
>>
>> On 01/25/2018 05:54 PM, Tariq Toukan wrote:
>>>
Ping?
Thanks
Jianchao
On 12/10/18 11:01 AM, Jianchao Wang wrote:
> Hi Jens
>
> Please consider this patchset for 4.21.
>
> It refactors the code of issue request directly to unify the interface
> and make the code clearer and more readable.
>
> The 1st patch refactors the code of issue
On 12/7/18 11:47 AM, Jens Axboe wrote:
> On 12/6/18 8:46 PM, jianchao.wang wrote:
>>
>>
>> On 12/7/18 11:42 AM, Jens Axboe wrote:
>>> On 12/6/18 8:41 PM, jianchao.wang wrote:
>>>>
>>>>
>>>> On 12/7/18 11:34 AM, Jens Axboe wrote:
On 12/7/18 11:42 AM, Jens Axboe wrote:
> On 12/6/18 8:41 PM, jianchao.wang wrote:
>>
>>
>> On 12/7/18 11:34 AM, Jens Axboe wrote:
>>> On 12/6/18 8:32 PM, Jens Axboe wrote:
>>>> On 12/6/18 8:26 PM, jianchao.wang wrote:
>>>>>
On 12/7/18 11:34 AM, Jens Axboe wrote:
> On 12/6/18 8:32 PM, Jens Axboe wrote:
>> On 12/6/18 8:26 PM, jianchao.wang wrote:
>>>
>>>
>>> On 12/7/18 11:16 AM, Jens Axboe wrote:
>>>> On 12/6/18 8:09 PM, Jianchao Wang wrote:
>>>>
On 12/7/18 11:16 AM, Jens Axboe wrote:
> On 12/6/18 8:09 PM, Jianchao Wang wrote:
>> Hi Jens
>>
>> Please consider this patchset for 4.21.
>>
>> It refactors the code of issue request directly to unify the interface
>> and make the code clearer and more readable.
>>
>> This patch set is rebased
On 12/6/18 11:19 PM, Jens Axboe wrote:
> On 12/5/18 8:32 PM, Jianchao Wang wrote:
>> It is not necessary to issue request directly with bypass 'true'
>> in blk_mq_sched_insert_requests and handle the non-issued requests
>> itself. Just set bypass to 'false' and let blk_mq_try_issue_directly
>>
Hi Ming
On 10/29/18 10:49 AM, Ming Lei wrote:
> On Sat, Oct 27, 2018 at 12:01:09AM +0800, Jianchao Wang wrote:
>> Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
>> into one interface which is able to handle the return value from
>> .queue_rq callback. Due to we can only issue
Hi Tejun
Thanks for your kind response.
On 09/21/2018 04:53 AM, Tejun Heo wrote:
> Hello,
>
> On Thu, Sep 20, 2018 at 06:18:21PM +0800, Jianchao Wang wrote:
>> -static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned
>> long nr)
>> +static inline void
Hi Jens
On 08/25/2018 11:41 PM, Jens Axboe wrote:
> do {
> - set_current_state(TASK_UNINTERRUPTIBLE);
> + if (test_bit(0, ))
> + break;
>
> - if (!has_sleeper && rq_wait_inc_below(rqw, get_limit(rwb, rw)))
> +
Hi Ming
On 07/31/2018 12:58 PM, Ming Lei wrote:
> On Tue, Jul 31, 2018 at 12:02:15PM +0800, Jianchao Wang wrote:
>> Currently, we will always set SCHED_RESTART whenever there are
>> requests in hctx->dispatch, then when request is completed and
>> freed the hctx queues will be restarted to avoid
Hi Keith
On 06/20/2018 12:39 AM, Keith Busch wrote:
> On Tue, Jun 19, 2018 at 04:30:50PM +0800, Jianchao Wang wrote:
>> There is race between nvme_remove and nvme_reset_work that can
>> lead to io hang.
>>
>> nvme_remove                    nvme_reset_work
>> -> change state to DELETING
>>
On 06/20/2018 09:35 AM, Bart Van Assche wrote:
> On Wed, 2018-06-20 at 09:28 +0800, jianchao.wang wrote:
>> Hi Bart
>>
>> Thanks for your kind response.
>>
>> On 06/19/2018 11:18 PM, Bart Van Assche wrote:
>>> On Tue, 2018-06-19 at 15:00 +0800
_request.
however, the scsi recovery context could clear ATOM_COMPLETE and requeue
the request before the irq
context gets it.
Thanks
Jianchao
>
> On 5/28/18, 6:11 PM, "jianchao.wang" wrote:
>
> Hi Himanshu
>
> do you need any other information ?
>
Hi Omar
Thanks for your kind and detailed comments.
That's really appreciated. :)
On 05/30/2018 02:55 AM, Omar Sandoval wrote:
> On Wed, May 23, 2018 at 02:33:22PM +0800, Jianchao Wang wrote:
>> Currently, kyber is very unfriendly with merging. kyber depends
>> on ctx rq_list to do merging,
Hi Himanshu
do you need any other information ?
Thanks
Jianchao
On 05/25/2018 02:48 PM, jianchao.wang wrote:
> Hi Himanshu
>
> I'm afraid I cannot provide you the vmcore file; it is from our customer.
> If any information is needed from the vmcore, I can provide it to you.
>
d for us to look at this in details.
>
> Can you provide me crash/vmlinux/modules for details analysis.
>
> Thanks,
> himanshu
>
> On 5/24/18, 6:49 AM, "Madhani, Himanshu" <himanshu.madh...@cavium.com> wrote:
>
>
> > On May 24, 2018, at 2:09
his issue.
>
> Thanks,
> Himanshu
>
>> -----Original Message-----
>> From: jianchao.wang [mailto:jianchao.w.w...@oracle.com]
>> Sent: Wednesday, May 23, 2018 6:51 PM
>> To: Dept-Eng QLA2xxx Upstream <qla2xxx-upstr...@cavium.com>; Madhani,
>> Himansh
Would anyone please take a look at this ?
Thanks in advance
Jianchao
On 05/23/2018 11:55 AM, jianchao.wang wrote:
>
>
> Hi all
>
> Our customer met a panic triggered by BUG_ON in blk_finish_request.
> From the dmesg log, the BUG_ON was triggered after command abort o
Hi all
Our customer met a panic triggered by BUG_ON in blk_finish_request.
From the dmesg log, the BUG_ON was triggered after command abort occurred many times.
There is a race condition in the following scenario.
cpu A                              cpu B
kworker
Hi Jens and Holger
Thanks for your kind response.
That's really appreciated.
I will post next version based on Jens' patch.
Thanks
Jianchao
On 05/23/2018 02:32 AM, Holger Hoffstätte wrote:
This looks great but prevents kyber from being built as module,
which is AFAIK supposed to
Hi Omar
Thanks for your kind response.
On 05/23/2018 04:02 AM, Omar Sandoval wrote:
> On Tue, May 22, 2018 at 10:48:29PM +0800, Jianchao Wang wrote:
>> Currently, kyber is very unfriendly with merging. kyber depends
>> on ctx rq_list to do merging, however, most of time, it will not
>> leave
Hi Max
Thanks for the kind review and suggestions.
On 05/16/2018 08:18 PM, Max Gurtovoy wrote:
> I don't know exactly what Christoph meant but IMO the best place to allocate
> it is in nvme_rdma_alloc_queue just before calling
>
> "set_bit(NVME_RDMA_Q_ALLOCATED, >flags);"
>
> then you
Hi Sagi
On 05/09/2018 11:06 PM, Sagi Grimberg wrote:
> The correct fix would be to add a tag for stop_queue and call
> nvme_rdma_stop_queue() in all the failure cases after
> nvme_rdma_start_queue.
Would you please look at the V2 in following link ?
Hi Christoph
On 05/07/2018 08:27 PM, Christoph Hellwig wrote:
> On Fri, May 04, 2018 at 04:02:18PM +0800, Jianchao Wang wrote:
>> BUG: KASAN: double-free or invalid-free in nvme_rdma_free_queue+0xf6/0x110
>> [nvme_rdma]
>> Workqueue: nvme-reset-wq nvme_rdma_reset_ctrl_work [nvme_rdma]
>> Call
Hi Max
On 04/27/2018 04:51 PM, jianchao.wang wrote:
> Hi Max
>
> On 04/26/2018 06:23 PM, Max Gurtovoy wrote:
>> Hi Jianchao,
>> I actually tried this scenario with real HW and was able to repro the hang.
>> Unfortunately, after applying your patch I got a NULL deref:
>
I'll add IsraelR proposed fix to nvme-rdma that is currently on hold and see
> what happens.
> Nonetheless, I don't like the situation that the reset and delete flows can
> run concurrently.
>
> -Max.
>
> On 4/26/2018 11:27 AM, jianchao.wang wrote:
>> Hi Max
>>
>
Hi Tejun and Joseph
On 04/27/2018 02:32 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote:
>> +Tejun (I guess he might be interested in the results below)
>
> Our experiments didn't work out too well either. At this point, it
> isn't clear whether
-> blk_freeze_queue
This patch could also fix this issue.
Thanks
Jianchao
On 04/22/2018 11:00 PM, jianchao.wang wrote:
> Hi Max
>
> That's really appreciated!
> Here is my test script.
>
> loop_reset_controller.sh
> #!/bin/bash
> while true
> do
>
018 10:48 PM, Max Gurtovoy wrote:
>
>
> On 4/22/2018 5:25 PM, jianchao.wang wrote:
>> Hi Max
>>
>> No, I only tested it on PCIe one.
>> And sorry that I didn't state that.
>
> Please send your exact test steps and we'll run it using RDMA transport.
> I also w
> On 4/22/2018 4:32 PM, jianchao.wang wrote:
>> Hi keith
>>
>> Would you please take a look at this patch.
>>
>> This issue could be reproduced easily with a driver bind/unbind loop,
>> a reset loop and a IO loop at the same time.
>>
Hi keith
Would you please take a look at this patch.
This issue could be reproduced easily with a driver bind/unbind loop,
a reset loop and a IO loop at the same time.
Thanks
Jianchao
On 04/19/2018 04:29 PM, Jianchao Wang wrote:
> There is race between nvme_remove and nvme_reset_work that can
Hi Ming
Thanks for your kindly response.
On 04/18/2018 11:40 PM, Ming Lei wrote:
>> Regarding this patchset, it is mainly to fix the dependency between
>> nvme_timeout and nvme_dev_disable, as you can see:
>> nvme_timeout will invoke nvme_dev_disable, and nvme_dev_disable have to
>> depend
Hi Ming
On 04/17/2018 11:17 PM, Ming Lei wrote:
> Looks blktest(block/011) can trigger IO hang easily on NVMe PCI device,
> and all are related with nvme_dev_disable():
>
> 1) admin queue may be disabled by nvme_dev_disable() from timeout path
> during resetting, then reset can't move on
>
> 2)
Hi Martin
On 04/17/2018 08:10 PM, Martin Steigerwald wrote:
> For testing it I add it to 4.16.2 with the patches I have already?
You could try applying only this patch as a test. :)
>
> - '[PATCH] blk-mq_Directly schedule q->timeout_work when aborting a
> request.mbox'
>
> - '[PATCH v2]
Hi Ming
Thanks for your kind response.
On 04/16/2018 04:15 PM, Ming Lei wrote:
>> -if (!blk_mq_get_dispatch_budget(hctx))
>> +if (!blk_mq_get_dispatch_budget(hctx)) {
>> +blk_mq_sched_mark_restart_hctx(hctx);
> The RESTART flag still may not take
Would anyone please review this?
Thanks in advance
Jianchao
On 04/10/2018 04:48 PM, Jianchao Wang wrote:
> If the cmd has not been returned after being aborted by qla2x00_eh_abort,
> we have to wait for it. However, the wait time is at least 1000ms currently.
> If there are a lot of cmds that need to be
Would anyone please review this patch?
Thanks in advance
Jianchao
On 03/07/2018 08:29 PM, Jianchao Wang wrote:
> iscsi tcp will first send out data, then calculate and send data
> digest. If we don't have BDI_CAP_STABLE_WRITES, the page cache will
> be written in spite of the on going
Hi Keith
Would you please take a look at this patch.
I really need your suggestion on this.
Sincerely
Jianchao
On 03/09/2018 10:01 AM, jianchao.wang wrote:
> Hi Keith
>
> Can I have the honor of getting your comment on this patch?
>
> Thanks in advance
> Jianchao
>
>
Hi Keith
Thanks for your precious time for testing and reviewing.
I will send out V3 next.
Sincerely
Jianchao
On 03/13/2018 02:59 AM, Keith Busch wrote:
> Hi Jianchao,
>
> The patch tests fine on all hardware I had. I'd like to queue this up
> for the next 4.16-rc. Could you send a v3 with the
Hi Keith
Can I have the honor of getting your comment on this patch?
Thanks in advance
Jianchao
On 03/08/2018 02:19 PM, Jianchao Wang wrote:
> nvme_dev_disable will issue command on adminq to clear HMB and
> delete io cq/sqs, maybe more in the future. When adminq no response,
> it has to
Hi Sagi
Thanks for your precious time spent reviewing and commenting.
On 03/09/2018 02:21 AM, Sagi Grimberg wrote:
>> +EXPORT_SYMBOL_GPL(nvme_abort_requests_sync);
>> +
>> +static void nvme_comp_req(struct request *req, void *data, bool reserved)
>
> Not a very good name...
Yes, indeed.
>
>> +{
>>
Hi Ming
Thanks for your precious time spent reviewing and commenting.
On 03/08/2018 09:11 PM, Ming Lei wrote:
> On Thu, Mar 8, 2018 at 2:19 PM, Jianchao Wang
> wrote:
>> Currently, we use nvme_cancel_request to complete the request
>> forcedly. This has following defects:
Hi Christoph
Thanks for your precious time for reviewing this.
On 03/08/2018 03:57 PM, Christoph Hellwig wrote:
>> -u8 flags;
>> u16 status;
>> +unsigned long flags;
> Please align the field name like the others, though
Yes, I will change
Hi Martin
Could you take some of your precious time to review this?
Thanks in advance.
Jianchao
On 03/03/2018 09:54 AM, Jianchao Wang wrote:
> In scsi core, __scsi_queue_insert should just put request back on
> the queue and retry using the same command as before. However, for
> blk-mq,
Hi Bart
Thanks for your kind response and guidance.
On 03/03/2018 12:31 AM, Bart Van Assche wrote:
> On Fri, 2018-03-02 at 11:31 +0800, Jianchao Wang wrote:
>> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
>> index a86df9c..d2f1838 100644
>> --- a/drivers/scsi/scsi_lib.c
>>
Hi Christoph
Thanks for your kind response and guidance.
On 03/01/2018 12:47 AM, Christoph Hellwig wrote:
> Note that we originally allocates irqs this way, and Keith changed
> it a while ago for good reasons. So I'd really like to see good
> reasons for moving away from this, and some
Hi Keith
Thanks for your kind guidance and precious time.
On 03/01/2018 11:15 PM, Keith Busch wrote:
> On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote:
>> When the adminq is free, ioq0 irq completion path has to invoke nvme_irq
>> twice, one for
Hi Andy
Thanks for your precious time and the kind reminder.
On 02/28/2018 11:59 PM, Andy Shevchenko wrote:
> On Wed, Feb 28, 2018 at 5:48 PM, Jianchao Wang
> wrote:
>> Currently, adminq and ioq0 share the same irq vector. This is
>> unfair for both adminq
Hi martin
On 03/02/2018 09:44 AM, Martin K. Petersen wrote:
>> In scsi core, __scsi_queue_insert should just put request back on the
>> queue and retry using the same command as before. However, for blk-mq,
>> scsi_mq_requeue_cmd is employed here which will unprepare the
>> request. To align with
Hi martin
Thanks for your kind response.
On 03/02/2018 09:43 AM, Martin K. Petersen wrote:
>
> Jianchao,
>
>> Yes, the block layer core guarantees that scsi_mq_get_budget() will be
>> called before scsi_queue_rq(). I think the full picture is as follows:
>
>> * Before scsi_queue_rq() calls
Hi Bart
Thanks for your precious time and detailed summary.
On 03/02/2018 01:43 AM, Bart Van Assche wrote:
> Yes, the block layer core guarantees that scsi_mq_get_budget() will be called
> before scsi_queue_rq(). I think the full picture is as follows:
> * Before scsi_queue_rq() calls