Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-23 Thread Bjorn Helgaas
On Fri, Mar 23, 2018 at 03:59:14PM -0600, Logan Gunthorpe wrote:
> On 23/03/18 03:50 PM, Bjorn Helgaas wrote:
> > Popping way up the stack, my original point was that I'm trying to
> > remove restrictions on what devices can participate in
> > peer-to-peer DMA.  I think it's fairly clear that in conventional
> > PCI, any devices in the same PCI hierarchy, i.e., below the same
> > host-to-PCI bridge, should be able to DMA to each other.
> 
> Yup, we are working on this.
> 
> > The routing behavior of PCIe is supposed to be compatible with
> > conventional PCI, and I would argue that this effectively requires
> > multi-function PCIe devices to have the internal routing required
> > to avoid the route-to-self issue.
> 
> That would be very nice but many devices do not support the internal
> route. We've had to work around this in the past and, as I mentioned
> earlier, NVMe devices have a flag indicating support. However,
> if a device wants to be involved in P2P it must support it and we
> can exclude devices that don't support it by simply not enabling
> their drivers.

Do you think these devices that don't support internal DMA between
functions are within spec, or should we handle them as exceptions,
e.g., via quirks?

If NVMe defines a flag indicating peer-to-peer support, that would
suggest to me that these devices are within spec.

I looked up the CMBSZ register you mentioned (NVMe 1.3a, sec 3.1.12).
You must be referring to the WDS, RDS, LISTS, CQS, and SQS bits.  If
WDS is set, the controller supports having Write-related data and
metadata in the Controller Memory Buffer.  That would mean the driver
could put certain queues in controller memory instead of in host
memory.  The controller could then read the queue from its own
internal memory rather than issuing a PCIe transaction to read it from
host memory.

That makes sense to me, but I don't see the connection to
peer-to-peer.  There's no multi-function device in this picture, so
it's not about internal DMA between functions.

WDS, etc., tell us about capabilities of the controller.  If WDS is
set, the CPU (or a peer PCIe device) can write things to controller
memory.  If it is clear, neither the CPU nor a peer device can put
things there.  So it doesn't seem to tell us anything about
peer-to-peer specifically.  It looks like information needed by the
NVMe driver, but not by the PCI core.
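
For concreteness, a driver-level check of those bits would look roughly
like this (a sketch only, assuming the NVME_CMB_* helpers and
NVME_REG_CMBSZ from include/linux/nvme.h):

  static bool nvme_cmb_supports_sqes(struct nvme_dev *dev)
  {
          u32 cmbsz = readl(dev->bar + NVME_REG_CMBSZ);

          /* SQS set: submission queues may be placed in the CMB. */
          return NVME_CMB_SQS(cmbsz);
  }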

Bjorn


[PATCH 2/3] nvme-pci: Remove unused queue parameter

2018-03-23 Thread Keith Busch
All nvme queue memory is allocated up front. We don't take the node
into consideration when creating queues anymore, so remove the unused
parameter.

Signed-off-by: Keith Busch 
---
 drivers/nvme/host/pci.c | 10 +++---
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index cef5ce851a92..632166f7d8f2 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1379,8 +1379,7 @@ static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq,
return 0;
 }
 
-static int nvme_alloc_queue(struct nvme_dev *dev, int qid,
-   int depth, int node)
+static int nvme_alloc_queue(struct nvme_dev *dev, int qid, int depth)
 {
struct nvme_queue *nvmeq = &dev->queues[qid];
 
@@ -1595,8 +1594,7 @@ static int nvme_pci_configure_admin_queue(struct nvme_dev *dev)
if (result < 0)
return result;
 
-   result = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH,
-   dev_to_node(dev->dev));
+   result = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH);
if (result)
return result;
 
@@ -1629,9 +1627,7 @@ static int nvme_create_io_queues(struct nvme_dev *dev)
int ret = 0;
 
for (i = dev->ctrl.queue_count; i <= dev->max_qid; i++) {
-   /* vector == qid - 1, match nvme_create_queue */
-   if (nvme_alloc_queue(dev, i, dev->q_depth,
-pci_irq_get_node(to_pci_dev(dev->dev), i - 1))) {
+   if (nvme_alloc_queue(dev, i, dev->q_depth)) {
ret = -ENOMEM;
break;
}
-- 
2.14.3



[PATCH 3/3] nvme-pci: Separate IO and admin queue IRQ vectors

2018-03-23 Thread Keith Busch
From: Jianchao Wang 

The admin and first IO queues shared the first irq vector, which has an
affinity mask including cpu0. If a system allows cpu0 to be offlined,
the admin queue may not be usable if no other CPUs in the affinity mask
are online. This is a problem since unlike IO queues, there is only
one admin queue that always needs to be usable.

To fix, this patch allocates one pre_vector for the admin queue that
is assigned all CPUs, so it will always be accessible. The IO queues are
assigned the remaining managed vectors.

In case a controller has only one interrupt vector available, the admin
and IO queues will share the pre_vector with all CPUs assigned.

Signed-off-by: Jianchao Wang 
Reviewed-by: Ming Lei 
[changelog, code comments, merge, and blk-mq pci vector offset]
Signed-off-by: Keith Busch 
---
 drivers/nvme/host/pci.c | 27 +--
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 632166f7d8f2..7b31bc01df6c 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -84,6 +84,7 @@ struct nvme_dev {
struct dma_pool *prp_small_pool;
unsigned online_queues;
unsigned max_qid;
+   unsigned int num_vecs;
int q_depth;
u32 db_stride;
void __iomem *bar;
@@ -139,6 +140,16 @@ static inline struct nvme_dev *to_nvme_dev(struct nvme_ctrl *ctrl)
return container_of(ctrl, struct nvme_dev, ctrl);
 }
 
+static inline unsigned int nvme_ioq_vector(struct nvme_dev *dev,
+   unsigned int qid)
+{
+   /*
+* A queue's vector matches the queue identifier unless the controller
+* has only one vector available.
+*/
+   return (dev->num_vecs == 1) ? 0 : qid;
+}
+
 /*
  * An NVM Express queue.  Each device has at least two (one for admin
  * commands and one for I/O commands).
@@ -414,7 +425,8 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 {
struct nvme_dev *dev = set->driver_data;
 
-   return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev));
+   return __blk_mq_pci_map_queues(set, to_pci_dev(dev->dev),
+  dev->num_vecs > 1);
 }
 
 /**
@@ -1455,7 +1467,7 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
nvmeq->sq_cmds_io = dev->cmb + offset;
}
 
-   nvmeq->cq_vector = qid - 1;
+   nvmeq->cq_vector = nvme_ioq_vector(dev, qid);
result = adapter_alloc_cq(dev, qid, nvmeq);
if (result < 0)
goto release_vector;
@@ -1908,6 +1920,8 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
struct pci_dev *pdev = to_pci_dev(dev->dev);
int result, nr_io_queues;
unsigned long size;
+   struct irq_affinity affd = {.pre_vectors = 1};
+   int ret;
 
nr_io_queues = num_present_cpus();
result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues);
@@ -1944,11 +1958,12 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
 * setting up the full range we need.
 */
pci_free_irq_vectors(pdev);
-   nr_io_queues = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
-   PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
-   if (nr_io_queues <= 0)
+   ret = pci_alloc_irq_vectors_affinity(pdev, 1, (nr_io_queues + 1),
+   PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
+   if (ret <= 0)
return -EIO;
-   dev->max_qid = nr_io_queues;
+   dev->num_vecs = ret;
+   dev->max_qid = max(ret - 1, 1);
 
/*
 * Should investigate if there's a performance win from allocating
-- 
2.14.3



[PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-23 Thread Keith Busch
The PCI interrupt vectors intended to be associated with a queue may
not start at 0. This patch adds an offset parameter so blk-mq may find
the intended affinity mask. The default value is 0 so existing drivers
that don't care about this parameter don't need to change.
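
As an example of the intended use, a driver that reserves one pre_vector
for its admin interrupt would map its hardware queues starting at vector 1,
roughly as below (a sketch only, with hypothetical foo_* names):

  static int foo_map_queues(struct blk_mq_tag_set *set)
  {
          struct foo_dev *foo = set->driver_data;

          /* I/O queue vectors start at 1; vector 0 is the admin pre_vector. */
          return __blk_mq_pci_map_queues(set, foo->pdev, 1);
  }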

Signed-off-by: Keith Busch 
---
 block/blk-mq-pci.c | 12 ++--
 include/linux/blk-mq-pci.h |  2 ++
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index 76944e3271bf..1040a7705c13 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -21,6 +21,7 @@
  * blk_mq_pci_map_queues - provide a default queue mapping for PCI device
  * @set:   tagset to provide the mapping for
  * @pdev:  PCI device associated with @set.
+ * @offset:	PCI irq starting vector offset
  *
  * This function assumes the PCI device @pdev has at least as many available
  * interrupt vectors as @set has queues.  It will then query the vector
@@ -28,13 +29,14 @@
  * that maps a queue to the CPUs that have irq affinity for the corresponding
  * vector.
  */
-int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
+int __blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev,
+   int offset)
 {
const struct cpumask *mask;
unsigned int queue, cpu;
 
for (queue = 0; queue < set->nr_hw_queues; queue++) {
-   mask = pci_irq_get_affinity(pdev, queue);
+   mask = pci_irq_get_affinity(pdev, queue + offset);
if (!mask)
goto fallback;
 
@@ -50,4 +52,10 @@ int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
set->mq_map[cpu] = 0;
return 0;
 }
+EXPORT_SYMBOL_GPL(__blk_mq_pci_map_queues);
+
+int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
+{
+   return __blk_mq_pci_map_queues(set, pdev, 0);
+}
 EXPORT_SYMBOL_GPL(blk_mq_pci_map_queues);
diff --git a/include/linux/blk-mq-pci.h b/include/linux/blk-mq-pci.h
index 6338551e0fb9..5a92ecdbd78e 100644
--- a/include/linux/blk-mq-pci.h
+++ b/include/linux/blk-mq-pci.h
@@ -5,6 +5,8 @@
 struct blk_mq_tag_set;
 struct pci_dev;
 
+int __blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev,
+   int offset);
 int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev);
 
 #endif /* _LINUX_BLK_MQ_PCI_H */
-- 
2.14.3



Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-23 Thread Logan Gunthorpe


On 23/03/18 03:50 PM, Bjorn Helgaas wrote:
> Popping way up the stack, my original point was that I'm trying to
> remove restrictions on what devices can participate in peer-to-peer
> DMA.  I think it's fairly clear that in conventional PCI, any devices
> in the same PCI hierarchy, i.e., below the same host-to-PCI bridge,
> should be able to DMA to each other.

Yup, we are working on this.

> The routing behavior of PCIe is supposed to be compatible with
> conventional PCI, and I would argue that this effectively requires
> multi-function PCIe devices to have the internal routing required to
> avoid the route-to-self issue.

That would be very nice but many devices do not support the internal
route. We've had to work around this in the past and, as I mentioned
earlier, NVMe devices have a flag indicating support. However, if a
device wants to be involved in P2P it must support it and we can exclude
devices that don't support it by simply not enabling their drivers.

Logan


Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-23 Thread Bjorn Helgaas
On Thu, Mar 22, 2018 at 10:57:32PM +, Stephen  Bates wrote:
> >  I've seen the response that peers directly below a Root Port could not
> > DMA to each other through the Root Port because of the "route to self"
> > issue, and I'm not disputing that.  
> 
> Bjorn 
> 
> You asked me for a reference to RTS in the PCIe specification. As
> luck would have it I ended up in an Irish bar with Peter Onufryk
> this week at OCP Summit. We discussed the topic. It is not
> explicitly referred to as "Route to Self" and it's certainly not
> explicit (or obvious) but r6.2.8.1 of the PCIe 4.0 specification
> discusses error conditions for virtual PCI bridges. One of these
> conditions (given in the very first bullet in that section) applies
> to a request that is destined for the same port it came in on. When
> this occurs the request must be terminated as a UR.

Thanks for that reference!

I suspect figure 10-3 in sec 10.1.1 might also be relevant, although
it's buried in the ATS section.  It shows internal routing between
functions of a multifunction device.  That suggests that the functions
should be able to DMA to each other without those transactions ever
appearing on the link.

Popping way up the stack, my original point was that I'm trying to
remove restrictions on what devices can participate in peer-to-peer
DMA.  I think it's fairly clear that in conventional PCI, any devices
in the same PCI hierarchy, i.e., below the same host-to-PCI bridge,
should be able to DMA to each other.

The routing behavior of PCIe is supposed to be compatible with
conventional PCI, and I would argue that this effectively requires
multi-function PCIe devices to have the internal routing required to
avoid the route-to-self issue.

Bjorn


Multi-Actuator SAS HDD First Look

2018-03-23 Thread Tim Walker
Seagate announced their split actuator SAS drive, which will probably
require some kernel changes for full support. It's targeted at cloud
provider JBODs and RAID.

Here are some of the drive's architectural points. Since the two LUNs
share many common components (e.g., the spindle), Seagate allocated some
SCSI operations to be LUN-specific and some to affect the entire
device, that is, both LUNs.

1. Two LUNs, 0 & 1, each with independent LBA space, and each
connected to an independent read channel, actuator, and set of heads.
2. Each actuator addresses 1/2 of the media - no media is shared
across the actuators. They seek independently.
3. One World Wide Name (WWN) is assigned to the port for device
address. Each Logical Unit has a separate World Wide Name for
identification in the VPD page.
4. 128-deep command queue, shared across both LUNs
5. Each LUN can pull commands from the queue independently, so they
can implement their own sorting and optimization.
6. Ordered tag attribute causes the command to be ordered across both
Logical Units
7. Head of Queue attribute causes the command to be ordered with
respect to a single Logical Unit
8. Mode pages are device-based (shared across both Logical Units).
9. Log pages are device-based.
10. Inquiry VPD pages (with minor exceptions) are device-based.
11. Device health features (SMART, etc.) are device-based.

Seagate wants the multi-actuator design to integrate into the stack as
painlessly as possible. The interface design is still in the early
stages, so I am gathering requirements and recommendations, and also
providing any information necessary to help scope integrating a
multi-LUN device into the MQ stack. So, I am soliciting any pertinent
feedback including:

1. Painful incompatibilities between the Seagate proposal and current
MQ architecture
2. Linux changes needed
3. Drive interface changes needed
4. Anything else I may have overlooked

Please feel free to send any questions or comments.

Tim Walker
Product Design Systems Engineering, Seagate Technology
(303) 775-3770


Re: problem with bio handling on raid5 and pblk

2018-03-23 Thread Javier González
> On 22 Mar 2018, at 18.00, Matias Bjørling  wrote:
> 
> On 03/22/2018 03:34 PM, Javier González wrote:
>> Hi,
>> I have been looking into a bug report when using pblk and raid5 on top
>> and I am having problems understanding if the problem is in pblk's bio
>> handling or on raid5's bio assumptions on the completion path.
>> The problem occurs on the read path. In pblk, we take a reference to
>> every read bio as it enters, and release it after completing the bio.
>>   generic_make_request()
>>     pblk_submit_read()
>>       bio_get()
>>       ...
>>       bio_endio()
>>       bio_put()
>> The problem seems to be that on raid5's bi_end_io completion path,
>> raid5_end_read_request(), bio_reset() is called. When put together
>> with pblk's bio handling:
>>   generic_make_request()
>>     pblk_submit_read()
>>       bio_get()
>>       ...
>>       bio_endio()
>>         raid5_end_read_request()
>>           bio_reset()
>>       bio_put()
>> it results in the newly reset bio being put immediately, thus freed.
>> When the bio is then reused, we have an invalid pointer. In the report
>> we received, things crash at the BUG_ON(bio->bi_next) in
>> generic_make_request().
>> As far as I understand, it is part of normal bio operation for
>> drivers under generic_make_request() to be able to take references and
>> release them after bio completion. Thus, in this case, the assumption
>> made by raid5, that it can issue a bio_reset(), is incorrect. But I might
>> be missing an implicit cross-layer rule that we are violating in pblk.
>> Any ideas?
>> This said, after analyzing the problem from pblk's perspective, I see
>> no reason to use bio_get()/bio_put() in the read path, as it is at the
>> pblk level that we are calling bio_endio(), thus we cannot risk the
>> bio being freed underneath us. Is this reasoning correct? I remember I
>> introduced these at the time there was a bug on the aio path, which was
>> not cleaning up correctly and could trigger an early bio free, but
>> revisiting it now, it seems unnecessary.
>> Thanks for the help!
>> Javier
> 
> I think I sent a longer e-mail to you and Huaicheng about this a while back.

I don't think I was in that email.

There are two parts to the question. One is raid5's bio completion
assumptions and the other is whether we can avoid bio_get()/put() in
pblk's read path. The first part is pblk-independent, and I would like to
leave it open, as I want to understand how bio_reset() in this
context is correct. Right now, I cannot see how this is correct
behaviour.

For the pblk specific part, see below.

> The problem is that pblk encapsulates the bio in its own request.
> So the bios are freed before the struct request completion is done
> (as you identify). If you can make the completion path (as bios are
> completed before the struct request completion fn is called) not
> use the bio, then the bio_get/put code can be removed.
> 
> If it needs the bio on the completion path (e.g., for partial reads,
> and if needed in the struct request completion path), one should clone
> the bio, submit, and complete the original bio afterwards.

I don't follow how the relationship with struct request completion is
different with bio_get/put and without.

The flow in terms of bio and struct request management is today:

  generic_make_request()
    pblk_submit_read()
      bio_get()
      ...
      blk_init_request_from_bio()
      blk_execute_rq_nowait() / blk_execute_rq() // depending on sync/async
      ...
      bio_endio()
      bio_put()
      ...
      blk_mq_free_request()

The bios risk being freed in any case, as bio_put() will drop the last
pblk reference. The only case in which this will not happen is if somebody
else took a bio_get() on the way down. But we cannot assume anything.

I guess the problem I am having in understanding this is how we can risk
the bio disappearing underneath us when we are the ones completing the bio.
As I understand it, in this case we are always guaranteed that the
bio is alive due to the allocation reference. Therefore, bio_get()/put()
is not needed. Am I missing anything?
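
For reference, the lifetime rule this reasoning relies on, sketched
rather than taken from pblk code (a freshly allocated bio carries one
reference and is freed only when the last reference is put):

  struct bio *bio = bio_alloc(GFP_KERNEL, 1); /* allocation reference, count == 1 */

  bio_get(bio);                               /* extra reference, count == 2 */
  bio_put(bio);                               /* count == 1, bio still alive */
  bio_put(bio);                               /* count == 0, bio is freed */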

Thanks,
Javier





Re: [PATCH 4/4] nvme: lightnvm: add late setup of block size and metadata

2018-03-23 Thread Matias Bjørling

On 02/05/2018 01:15 PM, Matias Bjørling wrote:

The nvme driver sets up the size of the nvme namespace in two steps.
First it initializes the device with standard logical block and
metadata sizes, and then sets the correct logical block and metadata
size. Because the OCSSD 2.0 specification relies on the namespace to
expose these sizes for correct initialization, let them be updated
appropriately on the LightNVM side as well.

Signed-off-by: Matias Bjørling 
---
  drivers/nvme/host/core.c | 2 ++
  drivers/nvme/host/lightnvm.c | 8 
  drivers/nvme/host/nvme.h | 2 ++
  3 files changed, 12 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f837d666cbd4..740ceb28067c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1379,6 +1379,8 @@ static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id)
if (ns->noiob)
nvme_set_chunk_size(ns);
nvme_update_disk_info(disk, ns, id);
+   if (ns->ndev)
+   nvme_nvm_update_nvm_info(ns);
  #ifdef CONFIG_NVME_MULTIPATH
if (ns->head->disk)
nvme_update_disk_info(ns->head->disk, ns, id);
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index a9c010655ccc..8d4301854811 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -814,6 +814,14 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg)
}
  }
  
+void nvme_nvm_update_nvm_info(struct nvme_ns *ns)
+{
+   struct nvm_dev *ndev = ns->ndev;
+
+   ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift;
+   ndev->identity.sos = ndev->geo.oob_size = ns->ms;
+}
+
  int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node)
  {
struct request_queue *q = ns->queue;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index ea1aa5283e8e..1ca08f4993ba 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -451,12 +451,14 @@ static inline void nvme_mpath_clear_current_path(struct nvme_ns *ns)
  #endif /* CONFIG_NVME_MULTIPATH */
  
  #ifdef CONFIG_NVM
+void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
  int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
  void nvme_nvm_unregister(struct nvme_ns *ns);
  int nvme_nvm_register_sysfs(struct nvme_ns *ns);
  void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
  int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
  #else
+static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
  static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
int node)
  {



Hi Keith,

When going through the patches for 4.17, I forgot to run this patch by
you. It is part of adding OCSSD 2.0 support to the kernel, and slides in
between a large refactoring and the 2.0 part. May I add your Reviewed-by
and let Jens pick it up after the nvme patches for 4.17 have gone up?


Thanks!

-Matias


[PATCH] lightnvm: remove function name in strings

2018-03-23 Thread Matias Bjørling
For the sysfs functions, the function names are embedded into their
error strings. If the function name later changes, the string may
not be updated accordingly. Update the strings to use __func__
to avoid this.

Signed-off-by: Matias Bjørling 
---
 drivers/nvme/host/lightnvm.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index ffd64a83c8c3..41279da799ed 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1028,8 +1028,8 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
} else {
return scnprintf(page,
 PAGE_SIZE,
-"Unhandled attr(%s) in `nvm_dev_attr_show`\n",
-attr->name);
+"Unhandled attr(%s) in `%s`\n",
+attr->name, __func__);
}
 }
 
@@ -1103,8 +1103,8 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
return scnprintf(page, PAGE_SIZE, "%u\n", NVM_MAX_VLBA);
} else {
return scnprintf(page, PAGE_SIZE,
-   "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
-   attr->name);
+   "Unhandled attr(%s) in `%s`\n",
+   attr->name, __func__);
}
 }
 
@@ -1149,8 +1149,8 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
return scnprintf(page, PAGE_SIZE, "%u\n", geo->tbem);
} else {
return scnprintf(page, PAGE_SIZE,
-   "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n",
-   attr->name);
+   "Unhandled attr(%s) in `%s`\n",
+   attr->name, __func__);
}
 }
 
-- 
2.11.0