Re: [Xen-devel] [PATCH v2 01/19] xen: dump vNUMA information with debug key u

2014-12-09 Thread Wei Liu
On Mon, Dec 08, 2014 at 05:01:29PM +0000, Jan Beulich wrote:
[...]
> > +        for ( i = 0; i < vnuma->nr_vnodes; i++ )
> > +        {
> > +            err = snprintf(keyhandler_scratch, 12, "%3u",
> > +                    vnuma->vnode_to_pnode[i]);
> > +            if ( err < 0 || vnuma->vnode_to_pnode[i] == NUMA_NO_NODE )
> > +                strlcpy(keyhandler_scratch, "???", 3);
> > +
> > +            printk("   vnode  %3u -> pnode %s\n", i, keyhandler_scratch);
> > +            for ( j = 0; j < vnuma->nr_vmemranges; j++ )
> > +            {
> > +                if ( vnuma->vmemrange[j].nid == i )
> > +                {
> > +                    mem = vnuma->vmemrange[j].end - vnuma->vmemrange[j].start;
> > +                    printk("%16"PRIu64" MB: %#016"PRIx64" - %#016"PRIx64"\n",
> 
> Am I misremembering that these were just 0x%PRIx64 originally?

Yes.

> I ask because converting to the 0-padded fixed width form makes
> no sense together with the # modifier. For these ranges I think it's

OK.

> quite obvious that the numbers are hex, so I'd suggest dropping
> the #s without replacement. And to be honest I'm also against
> printing duplicate information: The memory range already specifies
> how much memory this is.
>

Is this what you want?

+                if ( vnuma->vmemrange[j].nid == i )
+                {
+                    printk(" %016"PRIx64" - %016"PRIx64"\n",
+                           vnuma->vmemrange[j].start,
+                           vnuma->vmemrange[j].end);
+                }

And it prints out something like:

(XEN)  2 vnodes, 2 vcpus:
(XEN)    vnode    0 -> pnode   0
(XEN)  0000000000000000 - 00000000bb800000
(XEN)    vcpus:   0

Wei.



Re: [Xen-devel] [PATCH v2 01/19] xen: dump vNUMA information with debug key u

2014-12-09 Thread Jan Beulich
>>> On 09.12.14 at 12:22, wei.l...@citrix.com wrote:
> Is this what you want?
>
> +                if ( vnuma->vmemrange[j].nid == i )
> +                {
> +                    printk(" %016"PRIx64" - %016"PRIx64"\n",
> +                           vnuma->vmemrange[j].start,
> +                           vnuma->vmemrange[j].end);
> +                }
>
> And it prints out something like:
>
> (XEN)  2 vnodes, 2 vcpus:
> (XEN)    vnode    0 -> pnode   0
> (XEN)  0000000000000000 - 00000000bb800000
> (XEN)    vcpus:   0

This looks fine, yes.

Jan




Re: [Xen-devel] [PATCH v2 01/19] xen: dump vNUMA information with debug key u

2014-12-08 Thread Jan Beulich
>>> On 01.12.14 at 16:33, wei.l...@citrix.com wrote:
> --- a/xen/arch/x86/numa.c
> +++ b/xen/arch/x86/numa.c
> @@ -363,10 +363,13 @@ EXPORT_SYMBOL(node_data);
>  static void dump_numa(unsigned char key)
>  {
>      s_time_t now = NOW();
> -    int i;
> +    unsigned int i, j, n;
> +    int err;
>      struct domain *d;
>      struct page_info *page;
>      unsigned int page_num_node[MAX_NUMNODES];
> +    uint64_t mem;
> +    struct vnuma_info *vnuma;

If this can be const, it should be in a pure dumping function.
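
Put concretely, a sketch of that suggestion (assuming the dump path really
never writes through the pointer; this is not the code that was eventually
committed):

    /* Sketch of the review suggestion only: dump_numa() just reads the
     * vNUMA data, so a pointer-to-const documents and enforces that. */
    const struct vnuma_info *vnuma;

    /* ... later, inside the for_each_domain() loop ... */
    vnuma = d->vnuma;            /* assigning a plain pointer to a
                                  * pointer-to-const is implicit and fine */
    /* vnuma->nr_vnodes = 0; */  /* ...whereas a write like this would now
                                  * be rejected at compile time */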

> @@ -408,6 +411,48 @@ static void dump_numa(unsigned char key)
>
>          for_each_online_node ( i )
>              printk("    Node %u: %u\n", i, page_num_node[i]);
> +
> +        if ( !d->vnuma )
> +            continue;
> +
> +        vnuma = d->vnuma;
> +        printk(" %u vnodes, %u vcpus:\n", vnuma->nr_vnodes, d->max_vcpus);
> +        for ( i = 0; i < vnuma->nr_vnodes; i++ )
> +        {
> +            err = snprintf(keyhandler_scratch, 12, "%3u",
> +                    vnuma->vnode_to_pnode[i]);
> +            if ( err < 0 || vnuma->vnode_to_pnode[i] == NUMA_NO_NODE )
> +                strlcpy(keyhandler_scratch, "???", 3);
> +
> +            printk("   vnode  %3u -> pnode %s\n", i, keyhandler_scratch);
> +            for ( j = 0; j < vnuma->nr_vmemranges; j++ )
> +            {
> +                if ( vnuma->vmemrange[j].nid == i )
> +                {
> +                    mem = vnuma->vmemrange[j].end - vnuma->vmemrange[j].start;
> +                    printk("%16"PRIu64" MB: %#016"PRIx64" - %#016"PRIx64"\n",

Am I misremembering that these were just 0x%PRIx64 originally?
I ask because converting to the 0-padded fixed width form makes
no sense together with the # modifier. For these ranges I think it's
quite obvious that the numbers are hex, so I'd suggest dropping
the #s without replacement. And to be honest I'm also against
printing duplicate information: The memory range already specifies
how much memory this is.
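
To see the formatting point in isolation, here is a self-contained userspace
sketch (illustration only, not part of the patch): with standard printf
semantics the "0x" prefix produced by '#' is emitted only for non-zero values
and it occupies two of the sixteen zero-padded columns, so the hex digits no
longer line up. Xen's printk emits the prefix unconditionally, but it also
places it inside the padded field, so the digit columns end up squeezed either
way. The sample values are hypothetical.

    /* fmt-demo.c -- build with: gcc -o fmt-demo fmt-demo.c */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t start = 0;              /* hypothetical range start */
        uint64_t end   = 0xbb800000ULL;  /* hypothetical range end   */

        /* With '#': prefix only for non-zero values, counted in the width. */
        printf("%#016"PRIx64" - %#016"PRIx64"\n", start, end);

        /* Without '#': always exactly 16 hex digits, as suggested above.   */
        printf("%016"PRIx64" - %016"PRIx64"\n", start, end);

        return 0;
    }

which prints:

    0000000000000000 - 0x000000bb800000
    0000000000000000 - 00000000bb800000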

> +                           mem >> 20,
> +                           vnuma->vmemrange[j].start,
> +                           vnuma->vmemrange[j].end);
> +                }
> +            }
> +
> +            printk("   vcpus: ");
> +            for ( j = 0, n = 0; j < d->max_vcpus; j++ )
> +            {
> +                if ( vnuma->vcpu_to_vnode[j] == i )
> +                {
> +                    if ( (n + 1) % 8 == 0 )
> +                        printk("%3d\n", j);
> +                    else if ( !(n % 8) && n != 0 )
> +                        printk("%17d ", j);
> +                    else
> +                        printk("%3d ", j);
> +                    n++;
> +                }

Please consider very-many-vCPU guests here - see Andrew's commit
9cf71226 ("process softirqs while dumping domains").
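
For reference, a sketch of the kind of adjustment being asked for, modelled on
that commit's approach of calling process_pending_softirqs() from inside the
dump loop; the 256-iteration interval and the exact call site here are
assumptions, not the change that eventually went in:

            printk("   vcpus: ");
            for ( j = 0, n = 0; j < d->max_vcpus; j++ )
            {
                /* Sketch: with thousands of vCPUs this loop runs for a long
                 * time, so periodically let pending softirqs be serviced.
                 * process_pending_softirqs() is declared in xen/softirq.h. */
                if ( !(j & 0xff) )
                    process_pending_softirqs();

                if ( vnuma->vcpu_to_vnode[j] == i )
                {
                    if ( (n + 1) % 8 == 0 )
                        printk("%3d\n", j);
                    else if ( !(n % 8) && n != 0 )
                        printk("%17d ", j);
                    else
                        printk("%3d ", j);
                    n++;
                }
            }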

Jan

