Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4

2015-05-21 Thread Ian Campbell
On Mon, 2015-05-18 at 14:03 +0100, Wei Liu wrote:
 On Mon, May 18, 2015 at 01:57:24PM +0100, Andrew Cooper wrote:
  These changesets cause the respective libxc functions to unconditionally
  dereference their max_cpus/nodes parameters as part of initial memory
  allocations.  It will fail at obtaining the correct number of cpus/nodes 
  from
  Xen, as the guest handles will not be NULL.
  
  Signed-off-by: Andrew Cooper andrew.coop...@citrix.com
  CC: Ian Campbell ian.campb...@citrix.com
  CC: Ian Jackson ian.jack...@eu.citrix.com
  CC: Wei Liu wei.l...@citrix.com
  CC: Boris Ostrovsky boris.ostrov...@oracle.com
  
 
 Acked-by: Wei Liu wei.l...@citrix.com

Applied, thanks.



___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4

2015-05-18 Thread Andrew Cooper
On 18/05/15 15:34, Boris Ostrovsky wrote:
 On 05/18/2015 10:09 AM, Andrew Cooper wrote:
 On 18/05/15 15:00, Boris Ostrovsky wrote:
 On 05/18/2015 08:57 AM, Andrew Cooper wrote:
 These changesets cause the respective libxc functions to
 unconditionally
 dereference their max_cpus/nodes parameters as part of initial memory
 allocations.  It will fail at obtaining the correct number of
 cpus/nodes from
 Xen, as the guest handles will not be NULL.

 Signed-off-by: Andrew Cooper andrew.coop...@citrix.com
 CC: Ian Campbell ian.campb...@citrix.com
 CC: Ian Jackson ian.jack...@eu.citrix.com
 CC: Wei Liu wei.l...@citrix.com
 CC: Boris Ostrovsky boris.ostrov...@oracle.com

 ---
 Spotted by XenServer's Coverity run.
 ---
tools/libxl/libxl.c   |4 ++--
tools/misc/xenpm.c|4 ++--
tools/python/xen/lowlevel/xc/xc.c |4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
 xenpm bug is already fixed (commit
 b315cd9cce5b6da7ca89b2d7bad3fb01e7716044 in the staging tree).

 I am not sure I understand why Coverity complains about other spots.
 For example, in libxl_get_cpu_topology() num_cpus can be left
 uninitialized only if xc_cputopoinfo(ctx->xch, &num_cpus, NULL) fails,
 in which case we go to 'GC_FREE;  return ret;', so it's not ever used.
 xc_cputopoinfo(ctx->xch, &num_cpus, NULL) unconditionally dereferences
 and reads num_cpus, and performs a memory allocation based on the
 result.

 Ah, OK. xc_cputopoinfo() (or, rather, the hypervisor) actually doesn't
 use the value of dereferenced num_cpus in this case but obviously
 Coverity can't know about this.

 So Coverity cross-checks routines to see how callers use the arguments?

xc_cputopoinfo(ctx->xch, &num_cpus, NULL) dereferences num_cpus as part
of its DECLARE_HYPERCALL_BUFFER()s.  All of this happens before getting
anywhere near the hypervisor.

~Andrew



Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4

2015-05-18 Thread Boris Ostrovsky

On 05/18/2015 10:09 AM, Andrew Cooper wrote:

On 18/05/15 15:00, Boris Ostrovsky wrote:

On 05/18/2015 08:57 AM, Andrew Cooper wrote:

These changesets cause the respective libxc functions to unconditionally
dereference their max_cpus/nodes parameters as part of initial memory
allocations.  It will fail at obtaining the correct number of
cpus/nodes from
Xen, as the guest handles will not be NULL.

Signed-off-by: Andrew Cooper andrew.coop...@citrix.com
CC: Ian Campbell ian.campb...@citrix.com
CC: Ian Jackson ian.jack...@eu.citrix.com
CC: Wei Liu wei.l...@citrix.com
CC: Boris Ostrovsky boris.ostrov...@oracle.com

---
Spotted by XenServer's Coverity run.
---
   tools/libxl/libxl.c   |4 ++--
   tools/misc/xenpm.c|4 ++--
   tools/python/xen/lowlevel/xc/xc.c |4 ++--
   3 files changed, 6 insertions(+), 6 deletions(-)

xenpm bug is already fixed (commit
b315cd9cce5b6da7ca89b2d7bad3fb01e7716044 in the staging tree).

I am not sure I understand why Coverity complains about other spots.
For example, in libxl_get_cpu_topology() num_cpus can be left
uninitialized only if xc_cputopoinfo(ctx->xch, &num_cpus, NULL) fails,
in which case we go to 'GC_FREE;  return ret;', so it's not ever used.

xc_cputopoinfo(ctx->xch, &num_cpus, NULL) unconditionally dereferences
and reads num_cpus, and performs a memory allocation based on the result.


Ah, OK. xc_cputopoinfo() (or, rather, the hypervisor) actually doesn't 
use the value of dereferenced num_cpus in this case but obviously 
Coverity can't know about this.


So Coverity cross-checks routines to see how callers use the arguments?

-boris





Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4

2015-05-18 Thread Andrew Cooper
On 18/05/15 15:00, Boris Ostrovsky wrote:
 On 05/18/2015 08:57 AM, Andrew Cooper wrote:
 These changesets cause the respective libxc functions to unconditionally
 dereference their max_cpus/nodes parameters as part of initial memory
 allocations.  It will fail at obtaining the correct number of
 cpus/nodes from
 Xen, as the guest handles will not be NULL.

 Signed-off-by: Andrew Cooper andrew.coop...@citrix.com
 CC: Ian Campbell ian.campb...@citrix.com
 CC: Ian Jackson ian.jack...@eu.citrix.com
 CC: Wei Liu wei.l...@citrix.com
 CC: Boris Ostrovsky boris.ostrov...@oracle.com

 ---
 Spotted by XenServer's Coverity run.
 ---
   tools/libxl/libxl.c   |4 ++--
   tools/misc/xenpm.c|4 ++--
   tools/python/xen/lowlevel/xc/xc.c |4 ++--
   3 files changed, 6 insertions(+), 6 deletions(-)

 xenpm bug is already fixed (commit
 b315cd9cce5b6da7ca89b2d7bad3fb01e7716044 in the staging tree).

 I am not sure I understand why Coverity complains about other spots.
 For example, in libxl_get_cpu_topology() num_cpus can be left
 uninitialized only if xc_cputopoinfo(ctx->xch, &num_cpus, NULL) fails,
 in which case we go to 'GC_FREE;  return ret;', so it's not ever used.

xc_cputopoinfo(ctx->xch, &num_cpus, NULL) unconditionally dereferences
and reads num_cpus, and performs a memory allocation based on the result.

~Andrew



Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4

2015-05-18 Thread Wei Liu
On Mon, May 18, 2015 at 01:57:24PM +0100, Andrew Cooper wrote:
 These changesets cause the respective libxc functions to unconditionally
 dereference their max_cpus/nodes parameters as part of initial memory
 allocations.  It will fail at obtaining the correct number of cpus/nodes from
 Xen, as the guest handles will not be NULL.
 
 Signed-off-by: Andrew Cooper andrew.coop...@citrix.com
 CC: Ian Campbell ian.campb...@citrix.com
 CC: Ian Jackson ian.jack...@eu.citrix.com
 CC: Wei Liu wei.l...@citrix.com
 CC: Boris Ostrovsky boris.ostrov...@oracle.com
 

Acked-by: Wei Liu wei.l...@citrix.com



Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4

2015-05-18 Thread Boris Ostrovsky

On 05/18/2015 08:57 AM, Andrew Cooper wrote:

These changesets cause the respective libxc functions to unconditionally
dereference their max_cpus/nodes parameters as part of initial memory
allocations.  It will fail at obtaining the correct number of cpus/nodes from
Xen, as the guest handles will not be NULL.

Signed-off-by: Andrew Cooper andrew.coop...@citrix.com
CC: Ian Campbell ian.campb...@citrix.com
CC: Ian Jackson ian.jack...@eu.citrix.com
CC: Wei Liu wei.l...@citrix.com
CC: Boris Ostrovsky boris.ostrov...@oracle.com

---
Spotted by XenServer's Coverity run.
---
  tools/libxl/libxl.c   |4 ++--
  tools/misc/xenpm.c|4 ++--
  tools/python/xen/lowlevel/xc/xc.c |4 ++--
  3 files changed, 6 insertions(+), 6 deletions(-)


xenpm bug is already fixed (commit 
b315cd9cce5b6da7ca89b2d7bad3fb01e7716044 in the staging tree).


I am not sure I understand why Coverity complains about other spots. For 
example, in libxl_get_cpu_topology() num_cpus can be left uninitialized 
only if xc_cputopoinfo(ctx->xch, &num_cpus, NULL) fails, in which case 
we go to 'GC_FREE;  return ret;', so it's not ever used.



-boris




diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index a6eb2df..295877b 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5105,7 +5105,7 @@ libxl_cputopology *libxl_get_cpu_topology(libxl_ctx *ctx, int *nb_cpu_out)
  xc_cputopo_t *cputopo;
  libxl_cputopology *ret = NULL;
  int i;
-unsigned num_cpus;
+unsigned num_cpus = 0;
  
   /* Setting buffer to NULL makes the call return number of CPUs */
   if (xc_cputopoinfo(ctx->xch, &num_cpus, NULL))
@@ -5191,7 +5191,7 @@ libxl_numainfo *libxl_get_numainfo(libxl_ctx *ctx, int *nr)
  uint32_t *distance;
  libxl_numainfo *ret = NULL;
  int i, j;
-unsigned num_nodes;
+unsigned num_nodes = 0;
  
   if (xc_numainfo(ctx->xch, &num_nodes, NULL, NULL)) {
       LOGE(ERROR, "Unable to determine number of nodes");
diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index fe2c001..2f9bd8e 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -356,7 +356,7 @@ static void signal_int_handler(int signo)
  struct timeval tv;
  int cx_cap = 0, px_cap = 0;
  xc_cputopo_t *cputopo = NULL;
-unsigned max_cpus;
+unsigned max_cpus = 0;
  
   if ( xc_cputopoinfo(xc_handle, &max_cpus, NULL) != 0 )
   {
@@ -961,7 +961,7 @@ void scaling_governor_func(int argc, char *argv[])
  void cpu_topology_func(int argc, char *argv[])
  {
  xc_cputopo_t *cputopo = NULL;
-unsigned max_cpus;
+unsigned max_cpus = 0;
  int i, rc;
  
   if ( xc_cputopoinfo(xc_handle, &max_cpus, NULL) != 0 )

diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index fbd93db..c77e15b 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1221,7 +1221,7 @@ static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
  static PyObject *pyxc_topologyinfo(XcObject *self)
  {
  xc_cputopo_t *cputopo = NULL;
-unsigned i, num_cpus;
+unsigned i, num_cpus = 0;
  PyObject *ret_obj = NULL;
  PyObject *cpu_to_core_obj, *cpu_to_socket_obj, *cpu_to_node_obj;
  
@@ -1293,7 +1293,7 @@ static PyObject *pyxc_topologyinfo(XcObject *self)
  
   static PyObject *pyxc_numainfo(XcObject *self)
   {
-unsigned i, j, num_nodes;
+unsigned i, j, num_nodes = 0;
  uint64_t free_heap;
  PyObject *ret_obj = NULL, *node_to_node_dist_list_obj;
  PyObject *node_to_memsize_obj, *node_to_memfree_obj;





[Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4

2015-05-18 Thread Andrew Cooper
These changesets cause the respective libxc functions to unconditionally
dereference their max_cpus/nodes parameters as part of initial memory
allocations.  It will fail at obtaining the correct number of cpus/nodes from
Xen, as the guest handles will not be NULL.

Signed-off-by: Andrew Cooper andrew.coop...@citrix.com
CC: Ian Campbell ian.campb...@citrix.com
CC: Ian Jackson ian.jack...@eu.citrix.com
CC: Wei Liu wei.l...@citrix.com
CC: Boris Ostrovsky boris.ostrov...@oracle.com

---
Spotted by XenServer's Coverity run.
---
 tools/libxl/libxl.c   |4 ++--
 tools/misc/xenpm.c|4 ++--
 tools/python/xen/lowlevel/xc/xc.c |4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index a6eb2df..295877b 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5105,7 +5105,7 @@ libxl_cputopology *libxl_get_cpu_topology(libxl_ctx *ctx, int *nb_cpu_out)
 xc_cputopo_t *cputopo;
 libxl_cputopology *ret = NULL;
 int i;
-unsigned num_cpus;
+unsigned num_cpus = 0;
 
 /* Setting buffer to NULL makes the call return number of CPUs */
 if (xc_cputopoinfo(ctx->xch, &num_cpus, NULL))
@@ -5191,7 +5191,7 @@ libxl_numainfo *libxl_get_numainfo(libxl_ctx *ctx, int *nr)
 uint32_t *distance;
 libxl_numainfo *ret = NULL;
 int i, j;
-unsigned num_nodes;
+unsigned num_nodes = 0;
 
 if (xc_numainfo(ctx->xch, &num_nodes, NULL, NULL)) {
     LOGE(ERROR, "Unable to determine number of nodes");
diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index fe2c001..2f9bd8e 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -356,7 +356,7 @@ static void signal_int_handler(int signo)
 struct timeval tv;
 int cx_cap = 0, px_cap = 0;
 xc_cputopo_t *cputopo = NULL;
-unsigned max_cpus;
+unsigned max_cpus = 0;
 
 if ( xc_cputopoinfo(xc_handle, &max_cpus, NULL) != 0 )
 {
@@ -961,7 +961,7 @@ void scaling_governor_func(int argc, char *argv[])
 void cpu_topology_func(int argc, char *argv[])
 {
 xc_cputopo_t *cputopo = NULL;
-unsigned max_cpus;
+unsigned max_cpus = 0;
 int i, rc;
 
 if ( xc_cputopoinfo(xc_handle, &max_cpus, NULL) != 0 )
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index fbd93db..c77e15b 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1221,7 +1221,7 @@ static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
 static PyObject *pyxc_topologyinfo(XcObject *self)
 {
 xc_cputopo_t *cputopo = NULL;
-unsigned i, num_cpus;
+unsigned i, num_cpus = 0;
 PyObject *ret_obj = NULL;
 PyObject *cpu_to_core_obj, *cpu_to_socket_obj, *cpu_to_node_obj;
 
@@ -1293,7 +1293,7 @@ static PyObject *pyxc_topologyinfo(XcObject *self)
 
 static PyObject *pyxc_numainfo(XcObject *self)
 {
-unsigned i, j, num_nodes;
+unsigned i, j, num_nodes = 0;
 uint64_t free_heap;
 PyObject *ret_obj = NULL, *node_to_node_dist_list_obj;
 PyObject *node_to_memsize_obj, *node_to_memfree_obj;
-- 
1.7.10.4

