[PATCH v3] ndctl list should show more hardware information

2017-07-11 Thread Yasunori Goto
Hi, 

I made v3 of the patch that outputs the hardware IDs.
If this patch is OK, please apply.

--
Change log

v3 :  - rebased on the human-readable option patch.


v2 :  - use json_object_new_int() for each value.
  - check the MAX value for each field.
  - change phys_id from signed short to unsigned short.

---
   The current "id" field of a DIMM (ndctl list -D) shows the DIMM
   module's vendor and serial number. However, I think that is not enough.

   If an NVDIMM breaks, users need to know its physical location,
   rather than its vendor or serial number, in order to replace it.

   There are two candidates for such information:
 a) NFIT Device Handle (the _ADR of the NVDIMM device).
    This data includes the node controller ID and the socket ID
    (see the decoding sketch below).
 b) NVDIMM Physical ID (the handle of the SMBIOS Memory Device, Type 17).
    dmidecode can show this handle together with more information, such
    as "Locator", so users can find the location of the NVDIMM.
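
   (For reference, the handle's bit layout is defined in the ACPI
   specification's NFIT Device Handle table; below is a minimal decoding
   sketch using the ACPI 6.x field positions:)

	/* Decode an ACPI NFIT Device Handle (_ADR) into location fields. */
	#include <stdio.h>

	static void decode_nfit_handle(unsigned int handle)
	{
		unsigned int dimm    = handle & 0xf;           /* bits [3:0]   */
		unsigned int channel = (handle >> 4) & 0xf;    /* bits [7:4]   */
		unsigned int mc      = (handle >> 8) & 0xf;    /* bits [11:8]  */
		unsigned int socket  = (handle >> 12) & 0xf;   /* bits [15:12] */
		unsigned int node    = (handle >> 16) & 0xfff; /* bits [27:16] */

		printf("node controller %u, socket %u, memory controller %u, "
				"channel %u, dimm %u\n",
				node, socket, mc, channel, dimm);
	}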

   At first, I thought the _ADR information was enough.

   However, when I talked with our firmware team about this requirement,
   they said it may not be good data, because the node controller ID is
   not stable on our servers due to the "partitioning" feature.
   (One of our server boxes can be divided into multiple partitions, and
   so may have more than one node 0.)

   So, I made ndctl show not only the NFIT Device Handle but also the
   NVDIMM Physical ID. Then users can find the location with dmidecode.
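
   For illustration, the two new fields would then appear next to "id"
   in the DIMM listing, and the "phys_id" value can be matched against
   the handle of the corresponding SMBIOS Type 17 record (all values
   below are hypothetical):

	# ndctl list -D
	{
	  "dev":"nmem0",
	  "id":"...",
	  "handle":"0x1",
	  "phys_id":"0x28"
	}

	# dmidecode -t 17 (excerpt)
	Handle 0x0028, DMI type 17, 40 bytes
	Memory Device
		Locator: DIMM_A1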


Signed-off-by: Yasunori Goto 
---

diff --git a/ndctl/list.c b/ndctl/list.c
index d81d646..7f8db66 100644
--- a/ndctl/list.c
+++ b/ndctl/list.c
@@ -388,7 +388,7 @@ int cmd_list(int argc, const char **argv, void *ctx)
json_object_object_add(jbus, "dimms", jdimms);
}
 
-   jdimm = util_dimm_to_json(dimm);
+   jdimm = util_dimm_to_json(dimm, listopts_to_flags());
if (!jdimm) {
fail("\n");
continue;
diff --git a/util/json.c b/util/json.c
index 0878979..80512bd 100644
--- a/util/json.c
+++ b/util/json.c
@@ -148,10 +148,13 @@ struct json_object *util_bus_to_json(struct ndctl_bus *bus)
return NULL;
 }
 
-struct json_object *util_dimm_to_json(struct ndctl_dimm *dimm)
+struct json_object *util_dimm_to_json(struct ndctl_dimm *dimm,
+   unsigned long flags)
 {
struct json_object *jdimm = json_object_new_object();
const char *id = ndctl_dimm_get_unique_id(dimm);
+   unsigned int handle = ndctl_dimm_get_handle(dimm);
+   unsigned short phys_id = ndctl_dimm_get_phys_id(dimm);
struct json_object *jobj;
 
if (!jdimm)
@@ -169,6 +172,20 @@ struct json_object *util_dimm_to_json(struct ndctl_dimm *dimm)
json_object_object_add(jdimm, "id", jobj);
}
 
+   if (handle < UINT_MAX) {
+   jobj = util_json_object_hex(handle, flags);
+   if (!jobj)
+   goto err;
+   json_object_object_add(jdimm, "handle", jobj);
+   }
+
+   if (phys_id < USHRT_MAX) {
+   jobj = util_json_object_hex(phys_id, flags);
+   if (!jobj)
+   goto err;
+   json_object_object_add(jdimm, "phys_id", jobj);
+   }
+
if (!ndctl_dimm_is_enabled(dimm)) {
jobj = json_object_new_string("disabled");
if (!jobj)
diff --git a/util/json.h b/util/json.h
index ec394b6..d885ead 100644
--- a/util/json.h
+++ b/util/json.h
@@ -26,7 +26,8 @@ enum util_json_flags {
 struct json_object;
void util_display_json_array(FILE *f_out, struct json_object *jarray, int jflag);
 struct json_object *util_bus_to_json(struct ndctl_bus *bus);
-struct json_object *util_dimm_to_json(struct ndctl_dimm *dimm);
+struct json_object *util_dimm_to_json(struct ndctl_dimm *dimm,
+   unsigned long flags);
 struct json_object *util_mapping_to_json(struct ndctl_mapping *mapping,
unsigned long flags);
 struct json_object *util_namespace_to_json(struct ndctl_namespace *ndns,
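
(For context, a minimal usage sketch of the new util_dimm_to_json()
signature; it assumes UTIL_JSON_HUMAN is the flag introduced by the
human-readable option patch this series is rebased on, in which case
util_json_object_hex() renders the values as hex strings as in the
example above:)

	/* Sketch: emit one DIMM as JSON with human-readable hex fields. */
	struct json_object *jdimm = util_dimm_to_json(dimm, UTIL_JSON_HUMAN);

	if (jdimm)
		printf("%s\n", json_object_to_json_string(jdimm));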

---
Yasunori Goto




Re: Enabling peer to peer device transactions for PCIe devices

2017-07-11 Thread lee fei
Hello everyone,

I'm trying to enable GPU (Nvidia GPU) P2P in a KVM VM. The two GPUs are
under a PCIe switch and can do P2P on the host. But they cannot do GPU
P2P in the VM, even though they pass the detection added by "[09/22]
PCI: Add pci_peer_traffic_supported()"
(https://patchwork.kernel.org/patch/7188701/), which adds checks for
topology and ACS configuration to determine whether or not peer traffic
should be supported between two PCI devices.



The full patchset: "[PATCH 00/22] DMA-API/PCI map_peer_resource support
for peer-to-peer" (http://www.spinics.net/lists/linux-pci/msg44560.html).


So I'm wondering: does this thread take this situation (a virtualization
scenario) into consideration? How can we judge whether P2P can be
supported inside a KVM (or Xen, etc.) VM?

Thanks!
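
(For illustration only: on the host, a rough heuristic for the topology
half of such a check is to look for a shared upstream bridge by
comparing resolved sysfs paths. This is only a sketch; it covers neither
ACS configuration nor the IOMMU, and the virtual topology a guest sees
need not match the host's, which is exactly the problem raised above.)

	/* Sketch: find the deepest shared ancestor of two PCI devices.
	 * Usage: ./a.out 0000:03:00.0 0000:04:00.0 */
	#include <stdio.h>
	#include <stdlib.h>

	int main(int argc, char **argv)
	{
		char link[64];
		char *pa, *pb;
		size_t i, common = 0;

		if (argc != 3)
			return 1;

		snprintf(link, sizeof(link), "/sys/bus/pci/devices/%s", argv[1]);
		pa = realpath(link, NULL);
		snprintf(link, sizeof(link), "/sys/bus/pci/devices/%s", argv[2]);
		pb = realpath(link, NULL);
		if (!pa || !pb)
			return 1;

		/* Longest common directory prefix of the two resolved paths. */
		for (i = 0; pa[i] && pa[i] == pb[i]; i++)
			if (pa[i] == '/')
				common = i;
		pa[common] = '\0';

		/* If this path ends in a bridge below the root port, the two
		 * devices share a PCIe switch. */
		printf("deepest shared ancestor: %s\n", pa);
		free(pa);
		free(pb);
		return 0;
	}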


Re: [RFC 1/4] libnvdimm: add to_{nvdimm,nd_region}_dev()

2017-07-11 Thread Dan Williams
On Mon, Jul 10, 2017 at 9:38 PM, Oliver  wrote:
> On Tue, Jul 11, 2017 at 9:53 AM, Dan Williams  wrote:
>> On Tue, Jun 27, 2017 at 3:28 AM, Oliver O'Halloran  wrote:
>>> struct device contains the ->of_node pointer so that devices can be
>>> associated with the device-tree node that created them on DT platforms.
>>> libnvdimm hides the struct device for regions and nvdimm devices inside
>>> of an opaque structure so this patch adds accessors for each to allow
>>> the of_nvdimm driver to set the of_node pointer.
>>
>> I'd rather go the other way and pass in the of_node to the bus and
>> dimm registration routines. It's a generic property of the device so
>> we should handle it like other generic device properties that get set
>> at initialization time like 'attr_groups' in nvdimm_bus_descriptor, or
>> a new parameter to nvdimm_create().
>
> Sure. I just figured it would be preferable to keep firmware-specific
> details inside the firmware driver rather than adding #ifdef CONFIG_OF
> around the place. Do you have any objections to making nvdimm_create()
> take a descriptor structure rather than adding a parameter?

I don't see why we need "#ifdef CONFIG_OF". It's just a "struct
device_node *" pointer that can be forward-declared as "struct
device_node;", so we don't need the full definition.

Yes, I'm fine with converting nvdimm_create() to take a descriptor.
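
(For illustration, a minimal sketch of the descriptor route; the
of_node field and its placement are assumptions, and the DT node type
behind a device's ->of_node pointer in the kernel is struct
device_node:)

	/* A forward declaration suffices, since only a pointer is stored;
	 * no #ifdef CONFIG_OF is needed. */
	struct device_node;

	struct nvdimm_bus_descriptor {
		const struct attribute_group **attr_groups;
		/* ... existing fields ... */
		struct device_node *of_node;	/* set by the firmware driver */
	};

The registering driver would then fill in desc.of_node before calling
nvdimm_bus_register(), and the core would copy it to the bus device at
registration time.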