Re: [PATCH 5/5 v2] nvme: LightNVM support

2015-04-16 Thread Matias Bjorling

On 16-04-2015 at 16:55, Keith Busch wrote:

On Wed, 15 Apr 2015, Matias Bjørling wrote:

@@ -2316,7 +2686,9 @@ static int nvme_dev_add(struct nvme_dev *dev)
struct nvme_id_ctrl *ctrl;
void *mem;
dma_addr_t dma_addr;
-int shift = NVME_CAP_MPSMIN(readq(&dev->bar->cap)) + 12;
+u64 cap = readq(&dev->bar->cap);
+int shift = NVME_CAP_MPSMIN(cap) + 12;
+int nvm_cmdset = NVME_CAP_NVM(cap);


The controller capabilities' command sets supported used here is the
right way to key off on support for this new command set, IMHO, but I do
not see in this patch the command set being selected when the controller
is enabled


I'll get that added. Wouldn't the command set always just be selected? 
An NVMe controller can expose both normal and LightNVM namespaces, so we 
would always enable it if the CAP bit is set.




Also if we're going this route, I think we need to define this reserved
bit in the spec, but I'm not sure how to help with that.


Agree, we'll see how it can be proposed.




@@ -2332,6 +2704,7 @@ static int nvme_dev_add(struct nvme_dev *dev)
ctrl = mem;
nn = le32_to_cpup(&ctrl->nn);
dev->oncs = le16_to_cpup(&ctrl->oncs);
+dev->oacs = le16_to_cpup(&ctrl->oacs);


I don't find OACS used anywhere in the rest of the patch. I think this
must be left over from v1.


Oops, yes, that's just a leftover.



Otherwise it looks pretty good to me, but I think it would be cleaner if
the lightnvm stuff is not mixed in the same file with the standard nvme
command set. We might end up splitting nvme-core in the future anyway
for command sets and transports.


Will do. Thanks.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


RE: [PATCH 5/5 v2] nvme: LightNVM support

2015-04-16 Thread Keith Busch

On Thu, 16 Apr 2015, James R. Bergsten wrote:

My two cents worth is that it's (always) better to put ALL the commands into
one place so that the entire set can be viewed at once and thus avoid
inadvertent overloading of an opcode.  Otherwise you don't know what you
don't know.


Yes, but these are two different command sets.


RE: [PATCH 5/5 v2] nvme: LightNVM support

2015-04-16 Thread James R. Bergsten
My two cents worth is that it's (always) better to put ALL the commands into 
one place so that the entire set can be viewed at once and thus avoid 
inadvertent overloading of an opcode.  Otherwise you don't know what you don't 
know.

-Original Message-
From: Linux-nvme [mailto:linux-nvme-boun...@lists.infradead.org] On Behalf Of 
Keith Busch
Sent: Thursday, April 16, 2015 8:52 AM
To: Javier González
Cc: h...@infradead.org; Matias Bjørling; ax...@fb.com; 
linux-kernel@vger.kernel.org; linux-n...@lists.infradead.org; Keith Busch; 
linux-fsde...@vger.kernel.org
Subject: Re: [PATCH 5/5 v2] nvme: LightNVM support

On Thu, 16 Apr 2015, Javier González wrote:
>> On 16 Apr 2015, at 16:55, Keith Busch <keith.bu...@intel.com> wrote:
>>
>> Otherwise it looks pretty good to me, but I think it would be cleaner 
>> if the lightnvm stuff is not mixed in the same file with the standard 
>> nvme command set. We might end up splitting nvme-core in the future 
>> anyway for command sets and transports.
>
> Would you be ok with having nvme-lightnvm for LightNVM specific 
> commands?

Sounds good to me, but I don't really have a dog in this fight. :)



Re: [PATCH 5/5 v2] nvme: LightNVM support

2015-04-16 Thread Keith Busch

On Thu, 16 Apr 2015, Javier González wrote:

On 16 Apr 2015, at 16:55, Keith Busch <keith.bu...@intel.com> wrote:

Otherwise it looks pretty good to me, but I think it would be cleaner if
the lightnvm stuff is not mixed in the same file with the standard nvme
command set. We might end up splitting nvme-core in the future anyway
for command sets and transports.


Would you be ok with having nvme-lightnvm for LightNVM specific
commands?


Sounds good to me, but I don't really have a dog in this fight. :)

Re: [PATCH 5/5 v2] nvme: LightNVM support

2015-04-16 Thread Javier González
Hi,

> On 16 Apr 2015, at 16:55, Keith Busch <keith.bu...@intel.com> wrote:
> 
> On Wed, 15 Apr 2015, Matias Bjørling wrote:
>> @@ -2316,7 +2686,9 @@ static int nvme_dev_add(struct nvme_dev *dev)
>>  struct nvme_id_ctrl *ctrl;
>>  void *mem;
>>  dma_addr_t dma_addr;
>> -int shift = NVME_CAP_MPSMIN(readq(&dev->bar->cap)) + 12;
>> +u64 cap = readq(&dev->bar->cap);
>> +int shift = NVME_CAP_MPSMIN(cap) + 12;
>> +int nvm_cmdset = NVME_CAP_NVM(cap);
> 
> The controller capabilities' command sets supported used here is the
> right way to key off on support for this new command set, IMHO, but I do
> not see in this patch the command set being selected when the controller
> is enabled
> 
> Also if we're going this route, I think we need to define this reserved
> bit in the spec, but I'm not sure how to help with that.
> 
>> @@ -2332,6 +2704,7 @@ static int nvme_dev_add(struct nvme_dev *dev)
>>  ctrl = mem;
>>  nn = le32_to_cpup(&ctrl->nn);
>>  dev->oncs = le16_to_cpup(&ctrl->oncs);
>> +dev->oacs = le16_to_cpup(&ctrl->oacs);
> 
> I don't find OACS used anywhere in the rest of the patch. I think this
> must be left over from v1.
> 
> Otherwise it looks pretty good to me, but I think it would be cleaner if
> the lightnvm stuff is not mixed in the same file with the standard nvme
> command set. We might end up splitting nvme-core in the future anyway
> for command sets and transports.

Would you be ok with having nvme-lightnvm for LightNVM specific
commands?

Javier.



signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: [PATCH 5/5 v2] nvme: LightNVM support

2015-04-16 Thread Keith Busch

On Wed, 15 Apr 2015, Matias Bjørling wrote:

@@ -2316,7 +2686,9 @@ static int nvme_dev_add(struct nvme_dev *dev)
struct nvme_id_ctrl *ctrl;
void *mem;
dma_addr_t dma_addr;
-   int shift = NVME_CAP_MPSMIN(readq(&dev->bar->cap)) + 12;
+   u64 cap = readq(&dev->bar->cap);
+   int shift = NVME_CAP_MPSMIN(cap) + 12;
+   int nvm_cmdset = NVME_CAP_NVM(cap);


The controller capabilities' command sets supported used here is the
right way to key off on support for this new command set, IMHO, but I do
not see in this patch the command set being selected when the controller
is enabled

Also if we're going this route, I think we need to define this reserved
bit in the spec, but I'm not sure how to help with that.


@@ -2332,6 +2704,7 @@ static int nvme_dev_add(struct nvme_dev *dev)
ctrl = mem;
nn = le32_to_cpup(&ctrl->nn);
dev->oncs = le16_to_cpup(&ctrl->oncs);
+   dev->oacs = le16_to_cpup(&ctrl->oacs);


I don't find OACS used anywhere in the rest of the patch. I think this
must be left over from v1.

Otherwise it looks pretty good to me, but I think it would be cleaner if
the lightnvm stuff is not mixed in the same file with the standard nvme
command set. We might end up splitting nvme-core in the future anyway
for command sets and transports.






[PATCH 5/5 v2] nvme: LightNVM support

2015-04-15 Thread Matias Bjørling
The first generation of Open-Channel SSDs will be based on NVMe. The
integration requires that an NVMe device expose itself as a LightNVM
device. The way this is done currently is by hooking into the
Controller Capabilities (CAP register) and a bit in NSFEAT for each
namespace.

After detection, vendor specific codes are used to identify the device
and enumerate supported features.

Signed-off-by: Matias Bjørling m...@bjorling.me
---
 drivers/block/nvme-core.c | 380 +-
 include/linux/nvme.h  |   2 +
 include/uapi/linux/nvme.h | 116 ++
 3 files changed, 497 insertions(+), 1 deletion(-)

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index e23be20..cbbf728 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -39,6 +39,7 @@
 #include <linux/slab.h>
 #include <linux/t10-pi.h>
 #include <linux/types.h>
+#include <linux/lightnvm.h>
 #include <scsi/sg.h>
 #include <asm-generic/io-64-nonatomic-lo-hi.h>
 
@@ -134,6 +135,8 @@ static inline void _nvme_check_size(void)
BUILD_BUG_ON(sizeof(struct nvme_id_ns) != 4096);
BUILD_BUG_ON(sizeof(struct nvme_lba_range_type) != 64);
BUILD_BUG_ON(sizeof(struct nvme_smart_log) != 512);
+   BUILD_BUG_ON(sizeof(struct nvme_lnvm_hb_write_command) != 64);
+   BUILD_BUG_ON(sizeof(struct nvme_lnvm_l2ptbl_command) != 64);
 }
 
 typedef void (*nvme_completion_fn)(struct nvme_queue *, void *,
@@ -591,6 +594,30 @@ static void nvme_init_integrity(struct nvme_ns *ns)
 }
 #endif
 
+static struct nvme_iod *nvme_get_dma_iod(struct nvme_dev *dev, void *buf,
+   unsigned length)
+{
+   struct scatterlist *sg;
+   struct nvme_iod *iod;
+   struct device *ddev = &dev->pci_dev->dev;
+
+   if (!length || length > INT_MAX - PAGE_SIZE)
+   return ERR_PTR(-EINVAL);
+
+   iod = __nvme_alloc_iod(1, length, dev, 0, GFP_KERNEL);
+   if (!iod)
+   goto err;
+
+   sg = iod->sg;
+   sg_init_one(sg, buf, length);
+   iod->nents = 1;
+   dma_map_sg(ddev, sg, iod->nents, DMA_FROM_DEVICE);
+
+   return iod;
+err:
+   return ERR_PTR(-ENOMEM);
+}
+
 static void req_completion(struct nvme_queue *nvmeq, void *ctx,
struct nvme_completion *cqe)
 {
@@ -760,6 +787,46 @@ static void nvme_submit_flush(struct nvme_queue *nvmeq, struct nvme_ns *ns,
writel(nvmeq->sq_tail, nvmeq->q_db);
 }
 
+static int nvme_submit_lnvm_iod(struct nvme_queue *nvmeq, struct nvme_iod *iod,
+   struct nvme_ns *ns)
+{
+   struct request *req = iod_get_private(iod);
+   struct nvme_command *cmnd;
+   u16 control = 0;
+   u32 dsmgmt = 0;
+
+   if (req->cmd_flags & REQ_FUA)
+   control |= NVME_RW_FUA;
+   if (req->cmd_flags & (REQ_FAILFAST_DEV | REQ_RAHEAD))
+   control |= NVME_RW_LR;
+
+   if (req->cmd_flags & REQ_RAHEAD)
+   dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
+
+   cmnd = &nvmeq->sq_cmds[nvmeq->sq_tail];
+   memset(cmnd, 0, sizeof(*cmnd));
+
+   cmnd->lnvm_hb_w.opcode = (rq_data_dir(req) ?
+   lnvm_cmd_hybrid_write : lnvm_cmd_hybrid_read);
+   cmnd->lnvm_hb_w.command_id = req->tag;
+   cmnd->lnvm_hb_w.nsid = cpu_to_le32(ns->ns_id);
+   cmnd->lnvm_hb_w.prp1 = cpu_to_le64(sg_dma_address(iod->sg));
+   cmnd->lnvm_hb_w.prp2 = cpu_to_le64(iod->first_dma);
+   cmnd->lnvm_hb_w.slba = cpu_to_le64(nvme_block_nr(ns, blk_rq_pos(req)));
+   cmnd->lnvm_hb_w.length = cpu_to_le16(
+   (blk_rq_bytes(req) >> ns->lba_shift) - 1);
+   cmnd->lnvm_hb_w.control = cpu_to_le16(control);
+   cmnd->lnvm_hb_w.dsmgmt = cpu_to_le32(dsmgmt);
+   cmnd->lnvm_hb_w.phys_addr =
+   cpu_to_le64(nvme_block_nr(ns, req->phys_sector));
+
+   if (++nvmeq->sq_tail == nvmeq->q_depth)
+   nvmeq->sq_tail = 0;
+   writel(nvmeq->sq_tail, nvmeq->q_db);
+
+   return 0;
+}
+
 static int nvme_submit_iod(struct nvme_queue *nvmeq, struct nvme_iod *iod,
struct nvme_ns *ns)
 {
@@ -895,6 +962,8 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
nvme_submit_discard(nvmeq, ns, req, iod);
else if (req->cmd_flags & REQ_FLUSH)
nvme_submit_flush(nvmeq, ns, req->tag);
+   else if (req->cmd_flags & REQ_NVM_MAPPED)
+   nvme_submit_lnvm_iod(nvmeq, iod, ns);
else
nvme_submit_iod(nvmeq, iod, ns);
 
@@ -1156,6 +1225,84 @@ static int adapter_delete_sq(struct nvme_dev *dev, u16 sqid)
return adapter_delete_queue(dev, nvme_admin_delete_sq, sqid);
 }
 
+int nvme_nvm_identify_cmd(struct nvme_dev *dev, u32 chnl_off,
+   dma_addr_t dma_addr)
+{
+   struct nvme_command c;
+
   memset(&c, 0, sizeof(c));
+   c.common.opcode = lnvm_admin_identify;
+   c.common.nsid = 
