Signed-off-by: Tyler Ramer
Reviewed-by: Balbir Singh
---
Changes since v2:
* Clean up commit message per Balbir's comments
* Still call nvme_kill_queues()
---
drivers/nvme/host/pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index
Keith,
Thanks for clarifying. I appreciate the comments.
Signed-off-by: Tyler Ramer
---
Changes since v1:
* Clean up commit message
* Remove nvme_kill_queues()
---
drivers/nvme/host/pci.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c0808f9eb8ab..68d5fb880d80 100644
> Setting the shutdown to true is
> usually just to get the queues flushed, but the nvme_kill_queues() that
> we call accomplishes the same thing.
The intention of this patch was to clean up another location where
nvme_dev_disable() is called with shutdown == false even though the
device is being removed and will not be brought back online.
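
For context on the flushing Keith mentions above, the relevant piece is
the tail of nvme_dev_disable(): only the shutdown path restarts the
queues, so that already-entered requests are flushed to completion. A
minimal sketch from memory of the same-era code (verify against the
tree before quoting it):

	/* End of nvme_dev_disable(): on shutdown, restart the queues so
	 * entered requests reach a (failed) completion instead of
	 * deadlocking blk-mq.
	 */
	if (shutdown) {
		nvme_start_queues(&dev->ctrl);
		if (dev->ctrl.admin_q && !blk_queue_dying(dev->ctrl.admin_q))
			blk_mq_unquiesce_queue(dev->ctrl.admin_q);
	}

nvme_kill_queues() likewise unquiesces the queues and fails entered
requests, which is why the two end up accomplishing the same thing.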
> What is the bad CSTS bit? CSTS.RDY?
The reset will be triggered by the result of nvme_should_reset():
static bool nvme_should_reset(struct nvme_dev *dev, u32 csts)
{
	/* If true, indicates loss of adapter communication, possibly by a
	 * NVMe Subsystem reset.
	 */
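	/* (Remainder reconstructed from memory of the same-era code;
	 * treat it as a sketch, not a verbatim quote.)
	 */
	bool nssro = dev->subsystem && (csts & NVME_CSTS_NSSRO);

	/* If there is a reset/reinit ongoing, we shouldn't reset again. */
	switch (dev->ctrl.state) {
	case NVME_CTRL_RESETTING:
	case NVME_CTRL_CONNECTING:
		return false;
	default:
		break;
	}

	/* We shouldn't reset unless the controller is on fatal error state
	 * _or_ if we lost the communication with it.
	 */
	if (!(csts & NVME_CSTS_CFS) && !nssro)
		return false;

	return true;
}

So the bad bit in question would be CSTS.CFS (Controller Fatal
Status), not CSTS.RDY.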
Once a reset has been scheduled, the reset work will only proceed if
the controller state is NVME_CTRL_RESETTING thanks to
the WARN_ON in nvme_reset_work.
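
That WARN_ON sits at the very top of the reset work. A minimal sketch
of the guard, again from memory of the same-era pci.c:

	static void nvme_reset_work(struct work_struct *work)
	{
		struct nvme_dev *dev =
			container_of(work, struct nvme_dev, ctrl.reset_work);

		/* Only a controller already transitioned to RESETTING may
		 * be reset; anything else is a driver bug.
		 */
		if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING))
			goto out;

		/* ... bring the controller back up ... */
		return;
	out:
		/* Failed resets land here, the path this patch touches. */
		nvme_remove_dead_ctrl(dev);
	}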
On Thu, Oct 3, 2019 at 3:13 PM Tyler Ramer wrote:
>
> Always shutdown the controller when nvme_remove_dead_controller is
> reached.
>
> It's possible for nvme_remove_dead_controller to be called as part of a
> failed reset, when the controller reports a bad CSTS and will not be
> brought back online.
Signed-off-by: Tyler Ramer
---
drivers/nvme/host/pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c0808f9eb8ab..c3f5ba22c625 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2509,7 +2509,7 @@ static
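The hunk body itself would be the single-line flip of the shutdown
flag. Assuming the function is upstream's nvme_remove_dead_ctrl()
(written as nvme_remove_dead_controller in the prose above) with its
then-current body, it would read roughly:

 	nvme_get_ctrl(&dev->ctrl);
-	nvme_dev_disable(dev, false);
+	nvme_dev_disable(dev, true);
 	nvme_kill_queues(&dev->ctrl);

With shutdown == true, the disable path performs a clean controller
shutdown and flushes the queues, which is the behavior argued for
above.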