Re: [libvirt] storage lock issues

2017-11-02 Thread John Ferlan


On 11/02/2017 11:44 AM, Cedric Bosdonnat wrote:
> On Thu, 2017-11-02 at 07:03 -0400, John Ferlan wrote:
>> So how to fix - well it seems to me the storageDriverLock in VolCreateXML
>> may be the way, since the refresh thread takes the driver lock first,
>> then the pool object second.  Perhaps even like the build code, where it
>> takes it temporarily while getting the pool object. I'd have to think a
>> bit more about it, though. Still might be something to try since the Vol
>> Refresh thread takes it while running...
> 
> This is surely a bad hack, but it fixes the problem I'm seeing.
> Wouldn't the VolCreateXML function then hold the lock for too long
> and thus lead to other troubles?
> 

This was my first gut reaction (that is, the fix you chose), but I wanted
to take the time to make sure the paths being called wouldn't deadlock
as well, and of course to check history to ensure someone didn't remove
the lock for a specific reason.

You are correct that it's a long time to hold that storageDriverLock, but
the alternative is the crash you found.  We hold that lock across many
other long-running storage pool operations.

BTW: A different solution could be to follow how storageVolCreateXMLFrom
does things. It only takes the lock around storage pool obj operations.

While I'm thinking about it - Vol Delete perhaps should do some locking too.

Again, once the Storage/Vol code gets the common object and uses RW locks,
many of the existing DriverLock/Unlock calls won't be necessary because the
storage pool and storage volume lists would be self-locking. A rough sketch
of the ordering discussed here follows.
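
For illustration only (pseudocode built from the function names in this
thread, not a patch), the ordering being discussed looks roughly like:

    /* Take the driver-wide lock before the pool object lock, and release
     * it only after the pool object lock, matching the order used by
     * virStorageVolPoolRefreshThread. */
    storageDriverLock();                /* 1. driver lock first         */
    virStoragePoolObjLock(pool);        /* 2. then the pool object lock */

    /* ... manipulate pool->volumes safely here ... */

    virStoragePoolObjUnlock(pool);      /* 3. release the pool first    */
    storageDriverUnlock();              /* 4. driver lock last          */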


>  %< 
> diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
> index 18d630319..f5a1e7bc2 100644
> --- a/src/storage/storage_driver.c
> +++ b/src/storage/storage_driver.c
> @@ -1825,6 +1825,7 @@ storageVolCreateXML(virStoragePoolPtr obj,
>  if (backend->createVol(obj->conn, pool, voldef) < 0)
>  goto cleanup;
>  
> +storageDriverLock();
>  pool->volumes.objs[pool->volumes.count++] = voldef;
>  volobj = virGetStorageVol(obj->conn, pool->def->name, voldef->name,
>voldef->key, NULL, NULL);
> @@ -1858,9 +1859,7 @@ storageVolCreateXML(virStoragePoolPtr obj,
>  
>  VIR_FREE(buildvoldef);
>  
> -storageDriverLock();
>  virStoragePoolObjLock(pool);
> -storageDriverUnlock();
>  
>  voldef->building = false;
>  pool->asyncjobs--;
> @@ -1897,6 +1896,7 @@ storageVolCreateXML(virStoragePoolPtr obj,
>  voldef = NULL;
>  
>   cleanup:
> +storageDriverUnlock();

If we took it before the PoolObj lock, then we'd need to Unlock after we
unlock the pool obj...

John

>  virObjectUnref(volobj);
>  virStorageVolDefFree(voldef);
>  if (pool)
>  %< 
> 
> --
> Cedric
> 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 00/12] qemu: Add support for more migration parameters

2017-11-02 Thread Jiri Denemark
On Thu, Nov 02, 2017 at 16:12:34 -0400, John Ferlan wrote:
> 
> 
> On 10/26/2017 06:03 PM, Jiri Denemark wrote:
> > QEMU is transforming existing special migration parameters (those which
> > need dedicated QMP commands to be set or queried) into proper parameters
> > handled by query-migrate-parameters and migrate-set-parameters. Even
> > though we may still want to use the existing commands, adding support for
> > the transformed parameters will help us clean all of them before
> > migration and reset them to their original values after a failed
> > migration. Thus we wouldn't need to add more ad-hoc code which resets
> > some of them and ignores some others.
> > 
> > Jiri Denemark (12):
> >   qemu: Generalize PARSE macro in qemuMonitorJSONGetMigrationParams
> >   qemu: Use macro for parsing string migration parameters
> >   qemu: Use macro for parsing ull migration parameters
> >   qemu: Generalize APPEND macro in qemuMonitorJSONSetMigrationParams
> >   qemu: Use macro for setting string migration parameters
> >   qemu: Drop giant if statement from qemuMonitorSetMigrationParams
> >   qemumonitorjsontest: Rename 1st CHECK macro in migration params test
> >   qemumonitorjsontest: Rename 2nd CHECK macro in migration params test
> >   qemu: Add support for setting downtime-limit migration parameter
> >   qemu: Rename TLS related migration parameters
> >   qemu: Add support for max-bandwidth migration parameter
> >   qemu: Add support for block-incremental migration parameter
> > 
> >  src/qemu/qemu_migration.c|  23 +-
> >  src/qemu/qemu_monitor.c  |  20 +++--
> >  src/qemu/qemu_monitor.h  |  10 -
> >  src/qemu/qemu_monitor_json.c | 104 +++
> >  tests/qemumonitorjsontest.c  |  46 ---
> >  5 files changed, 123 insertions(+), 80 deletions(-)
> > 
> 
> As long as you add an update to docs/news.xml to describe the basics at
> least and the addition of parameters for downtime-limit, max-bandwidth,
> and block-incremental...

Nothing to be added to news.xml really. These are all internal changes
with no visible effect for users. This will change once we use all of this
to properly reset all parameters before and after migration, to make sure
we migrate with a clean environment. That will deserve a news item, but
we're not there yet. A sketch of the save/restore idea follows.
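
Roughly, the clean-environment idea amounts to a save/restore pattern around
the migration. A sketch only (the "wanted"/"orig" variables and the
qemuMigrationRun() entry point are made-up names for illustration; the
qemuMonitor*MigrationParams helpers are the ones this series touches):

    qemuMonitorMigrationParams orig;
    qemuMonitorMigrationParams wanted;  /* hypothetical desired values */

    /* Save the current parameters, apply the ones we want... */
    if (qemuMonitorGetMigrationParams(priv->mon, &orig) < 0)
        goto cleanup;
    if (qemuMonitorSetMigrationParams(priv->mon, &wanted) < 0)
        goto cleanup;

    /* ...and restore the saved values if the migration fails. */
    if (qemuMigrationRun(driver, vm) < 0)
        ignore_value(qemuMonitorSetMigrationParams(priv->mon, &orig));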

> Reviewed-by: John Ferlan 

Thanks.

Jirka

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] Post-release version bump to 3.10.0

2017-11-02 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---

Pushed.

 configure.ac  | 2 +-
 docs/news.xml | 8 
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/configure.ac b/configure.ac
index ebe3318e0..b2d991c3b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -16,7 +16,7 @@ dnl You should have received a copy of the GNU Lesser General Public
 dnl License along with this library.  If not, see
 dnl .
 
-AC_INIT([libvirt], [3.9.0], [libvir-list@redhat.com], [], [https://libvirt.org])
+AC_INIT([libvirt], [3.10.0], [libvir-list@redhat.com], [], [https://libvirt.org])
 AC_CONFIG_SRCDIR([src/libvirt.c])
 AC_CONFIG_AUX_DIR([build-aux])
 AC_CONFIG_HEADERS([config.h])
diff --git a/docs/news.xml b/docs/news.xml
index c24cc3f99..ef855d895 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -33,6 +33,14 @@
  -->
 
 
+  
+
+
+
+
+
+
+  
   
 
   
-- 
2.14.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] storage lock issues

2017-11-02 Thread John Ferlan


On 11/02/2017 09:10 AM, Cedric Bosdonnat wrote:
> Hi John,
> 
> On Thu, 2017-11-02 at 07:03 -0400, John Ferlan wrote:
>>
>> On 11/02/2017 06:20 AM, Cedric Bosdonnat wrote:
>>> Hi all,
>>>
>>> I'm investigating a crash in the storage driver, and it seems the solution
> >> is not as obvious as I had hoped.
>>>
> >> Basically what happens is that libvirtd crashes due to the pool refresh thread
> >> clearing the list of pool volumes while someone is adding a new volume. Here
>>> is the valgrind log I have for that:
>>>
>>> http://paste.opensuse.org/58733866
>>>
>>
>> What version of libvirt?  Your line numbers don't match up with current
>> HEAD.
> 
> It's 3.8.0.
> 
>>> I tried adding a call to virStoragePoolObjLock() in the
>>> virStorageVolPoolRefreshThread(), since the storageVolCreateXML()
>>> appears to have it, but I'm getting a dead lock. Here are the waiting
>>> threads I got from gdb:
>>
>> virStoragePoolObjFindByName returns a locked pool obj, so not sure
>> where/why you added this...
> 
> Oh, I overlooked that... then there is already a lock on it, and the
> solution is elsewhere.
> 
>>>
>>> http://paste.opensuse.org/45961070
>>>
> >> Any idea what would be a proper fix for that? My understanding of the
> >> storage driver's locks is too limited, it seems ;)
>>
>> From just a quick look...
>>
>> The virStorageVolPoolRefreshThread will take storageDriverLock, get a
>> locked pool object (virStoragePoolObjFindByName), clear out the pool,
>> add back volumes that it knows about, unlock the pool object, and call
>> storageDriverUnlock.
>>
>> If you were adding a new pool... You'd take storageDriverLock, then
>> eventually virStoragePoolObjAssignDef would be called and the pool's
>> object lock taken and unlocked, and returns a locked pool object which
>> gets later unlocked in cleanup, followed by a storageDriverUnlock.
>>
>> If you're adding a new volume object, you'd get a locked pool object via
>> virStoragePoolObjFromStoragePool, then if building, that lock gets
>> dropped after increasing the async job count and setting the building
>> flag, the volume is built, then the object lock retaken while
>> temporarily holding the storageDriverLock, the async job count gets
>> decremented and the building flag cleared, eventually we fall into
>> cleanup which unlocks the pool again.
> 
> Thanks for the details: this is really useful to globally understand how
> locks are working in the storage driver. Maybe worth turning it into a doc
> somewhere?
> 

You mean code isn't self-documenting? Shocking ;-) [tongue firmly
planted in cheek].

Perhaps when I complete the conversion for Storage Pool/Volume and do
more Domain fix-ups, I can add something somewhere that "generally"
describes the internal flow.  The goal I've had is to be fairly similar
across all drivers w/r/t object usage and code flow.  It's taking a long
time to get there, though, because it's a lot of "roto-tilling" of code.
In the meantime, a condensed sketch of the flows described earlier in
this thread is below.
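
Condensed, for reference (simplified pseudocode of the flows described
above, not actual libvirt code):

    /*
     * virStorageVolPoolRefreshThread:
     *     storageDriverLock();
     *     pool = virStoragePoolObjFindByName(...);  // returns a locked pool
     *     ...clear and repopulate pool->volumes...
     *     virStoragePoolObjUnlock(pool);
     *     storageDriverUnlock();
     *
     * storageVolCreateXML (before any fix):
     *     pool = virStoragePoolObjFromStoragePool(...);  // pool lock only;
     *     ...append to pool->volumes...                  // no driver lock,
     *                                                    // so it can race
     *                                                    // with the refresh
     *                                                    // thread
     */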


John

FWIW: I saw your query before we hopped in the car for a 7-hour drive -
figured at least providing something before I left would be helpful ;-).

>> So how to fix - well it seems to me the storageDriverLock in VolCreateXML
>> may be the way, since the refresh thread takes the driver lock first,
>> then the pool object second.  Perhaps even like the build code, where it
>> takes it temporarily while getting the pool object. I'd have to think a
>> bit more about it, though. Still might be something to try since the Vol
>> Refresh thread takes it while running...
> 
> Ok, I'll look into that direction.
> 
>> John
>>
>> Not related to this problem per se, but what may help even more is if I
>> can get the patches converting the storage driver to a common object model
>> completely reviewed, but that's a different problem ;-)... I'll have to
>> go look and see if I may have fixed this there. The newer model uses
>> hash tables, RW locks, and reduces the storage driver hammer lock, but
>> this one condition may not have been addressed.
> 
> I can have a go at it and see if I can reproduce the crash with it. Not sure
> I'll be the one skilled enough for the full review though ;)
> 
> --
> Cedric
> 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 00/12] qemu: Add support for more migration parameters

2017-11-02 Thread John Ferlan


On 10/26/2017 06:03 PM, Jiri Denemark wrote:
> QEMU is transforming existing special migration parameters (those which
> need dedicated QMP commands to be set or queried) into proper parameters
> handled by query-migrate-parameters and migrate-set-parameters. Even
> though we may still want to use the existing commands, adding support for
> the transformed parameters will help us clean all of them before
> migration and reset them to their original values after a failed
> migration. Thus we wouldn't need to add more ad-hoc code which resets
> some of them and ignores some others.
> 
> Jiri Denemark (12):
>   qemu: Generalize PARSE macro in qemuMonitorJSONGetMigrationParams
>   qemu: Use macro for parsing string migration parameters
>   qemu: Use macro for parsing ull migration parameters
>   qemu: Generalize APPEND macro in qemuMonitorJSONSetMigrationParams
>   qemu: Use macro for setting string migration parameters
>   qemu: Drop giant if statement from qemuMonitorSetMigrationParams
>   qemumonitorjsontest: Rename 1st CHECK macro in migration params test
>   qemumonitorjsontest: Rename 2nd CHECK macro in migration params test
>   qemu: Add support for setting downtime-limit migration parameter
>   qemu: Rename TLS related migration parameters
>   qemu: Add support for max-bandwidth migration parameter
>   qemu: Add support for block-incremental migration parameter
> 
>  src/qemu/qemu_migration.c|  23 +-
>  src/qemu/qemu_monitor.c  |  20 +++--
>  src/qemu/qemu_monitor.h  |  10 -
>  src/qemu/qemu_monitor_json.c | 104 +++
>  tests/qemumonitorjsontest.c  |  46 ---
>  5 files changed, 123 insertions(+), 80 deletions(-)
> 

As long as you add an update to docs/news.xml to describe the basics at
least and the addition of parameters for downtime-limit, max-bandwidth,
and block-incremental...

Reviewed-by: John Ferlan 
(series)

John

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] cpu_map: Add cqm alternative name for cmt

2017-11-02 Thread John Ferlan


On 11/02/2017 03:29 PM, Jiri Denemark wrote:
> The Linux kernel shows our "cmt" feature as "cqm". Let's mention that name
> in cpu_map.xml to make it easier to find.
> 
> Signed-off-by: Jiri Denemark 
> ---
>  src/cpu/cpu_map.xml | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 

Trivial... Reviewed-by: John Ferlan 

John

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] conf: Don't inline virDomainNetTypeSharesHostView

2017-11-02 Thread Jiri Denemark
When a coverage build is enabled, gcc complains about it:

In file included from qemu/qemu_agent.h:29:0,
 from qemu/qemu_driver.c:47:
qemu/qemu_driver.c: In function 'qemuDomainSetInterfaceParameters':
./conf/domain_conf.h:3397:1: error: inlining failed in call to
'virDomainNetTypeSharesHostView': call is unlikely and code size would
grow [-Werror=inline]
 virDomainNetTypeSharesHostView(const virDomainNetDef *net)
 ^

Signed-off-by: Jiri Denemark 
---

Notes:
Shouldn't we just kill all (or most of) our worthless usage of the
inline keyword?

 src/conf/domain_conf.c   | 36 
 src/conf/domain_conf.h   | 37 +++--
 src/libvirt_private.syms |  1 +
 3 files changed, 40 insertions(+), 34 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 77c20c697..394afb0d8 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -28082,3 +28082,39 @@ virDomainGenerateMachineName(const char *drivername,
 virBufferCheckError(&buf);
 return virBufferContentAndReset(&buf);
 }
+
+
+/**
+ * virDomainNetTypeSharesHostView:
+ * @net: interface
+ *
+ * Some types of interfaces "share" the host view. For instance,
+ * for macvtap interface, every domain RX is the host RX too. And
+ * every domain TX is host TX too. IOW, for some types of
+ * interfaces guest and host are on the same side of RX/TX
+ * barrier. This is important so that we set up QoS correctly and
+ * report proper stats.
+ */
+bool
+virDomainNetTypeSharesHostView(const virDomainNetDef *net)
+{
+virDomainNetType actualType = virDomainNetGetActualType(net);
+switch (actualType) {
+case VIR_DOMAIN_NET_TYPE_DIRECT:
+case VIR_DOMAIN_NET_TYPE_ETHERNET:
+return true;
+case VIR_DOMAIN_NET_TYPE_USER:
+case VIR_DOMAIN_NET_TYPE_VHOSTUSER:
+case VIR_DOMAIN_NET_TYPE_SERVER:
+case VIR_DOMAIN_NET_TYPE_CLIENT:
+case VIR_DOMAIN_NET_TYPE_MCAST:
+case VIR_DOMAIN_NET_TYPE_NETWORK:
+case VIR_DOMAIN_NET_TYPE_BRIDGE:
+case VIR_DOMAIN_NET_TYPE_INTERNAL:
+case VIR_DOMAIN_NET_TYPE_HOSTDEV:
+case VIR_DOMAIN_NET_TYPE_UDP:
+case VIR_DOMAIN_NET_TYPE_LAST:
+break;
+}
+return false;
+}
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 38de70b15..171f34078 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -3382,40 +3382,9 @@ virDomainGenerateMachineName(const char *drivername,
  int id,
  const char *name,
  bool privileged);
-/**
- * virDomainNetTypeSharesHostView:
- * @net: interface
- *
- * Some types of interfaces "share" the host view. For instance,
- * for macvtap interface, every domain RX is the host RX too. And
- * every domain TX is host TX too. IOW, for some types of
- * interfaces guest and host are on the same side of RX/TX
- * barrier. This is important so that we set up QoS correctly and
- * report proper stats.
- */
-static inline bool
-virDomainNetTypeSharesHostView(const virDomainNetDef *net)
-{
-virDomainNetType actualType = virDomainNetGetActualType(net);
-switch (actualType) {
-case VIR_DOMAIN_NET_TYPE_DIRECT:
-case VIR_DOMAIN_NET_TYPE_ETHERNET:
-return true;
-case VIR_DOMAIN_NET_TYPE_USER:
-case VIR_DOMAIN_NET_TYPE_VHOSTUSER:
-case VIR_DOMAIN_NET_TYPE_SERVER:
-case VIR_DOMAIN_NET_TYPE_CLIENT:
-case VIR_DOMAIN_NET_TYPE_MCAST:
-case VIR_DOMAIN_NET_TYPE_NETWORK:
-case VIR_DOMAIN_NET_TYPE_BRIDGE:
-case VIR_DOMAIN_NET_TYPE_INTERNAL:
-case VIR_DOMAIN_NET_TYPE_HOSTDEV:
-case VIR_DOMAIN_NET_TYPE_UDP:
-case VIR_DOMAIN_NET_TYPE_LAST:
-break;
-}
-return false;
-}
+
+bool
+virDomainNetTypeSharesHostView(const virDomainNetDef *net);
 
 bool
 virDomainDefLifecycleActionAllowed(virDomainLifecycle type,
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 448d962b2..811d9053e 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -444,6 +444,7 @@ virDomainNetInsert;
 virDomainNetRemove;
 virDomainNetRemoveHostdev;
 virDomainNetTypeFromString;
+virDomainNetTypeSharesHostView;
 virDomainNetTypeToString;
 virDomainNostateReasonTypeFromString;
 virDomainNostateReasonTypeToString;
-- 
2.14.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH 3/3] cputest: Add data for Intel(R) Core(TM) i7-7700 CPU

2017-11-02 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 tests/cputest.c|   1 +
 .../x86_64-cpuid-Core-i7-7700-disabled.xml |   6 +
 .../x86_64-cpuid-Core-i7-7700-enabled.xml  |   9 +
 .../x86_64-cpuid-Core-i7-7700-guest.xml|  25 ++
 .../cputestdata/x86_64-cpuid-Core-i7-7700-host.xml |  26 ++
 .../cputestdata/x86_64-cpuid-Core-i7-7700-json.xml |   9 +
 tests/cputestdata/x86_64-cpuid-Core-i7-7700.json   | 499 +
 tests/cputestdata/x86_64-cpuid-Core-i7-7700.xml|  47 ++
 8 files changed, 622 insertions(+)
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-disabled.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-enabled.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-guest.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-host.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-json.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700.json
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700.xml

diff --git a/tests/cputest.c b/tests/cputest.c
index 8a5316a88..a90ce01d1 100644
--- a/tests/cputest.c
+++ b/tests/cputest.c
@@ -1174,6 +1174,7 @@ mymain(void)
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-4510U", JSON_HOST);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-5600U", JSON_HOST);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-5600U-arat", JSON_HOST);
+DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-7700", JSON_MODELS);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Core2-E6850", JSON_HOST);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Core2-Q9500", JSON_NONE);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "EPYC-7601-32-Core", JSON_HOST);
diff --git a/tests/cputestdata/x86_64-cpuid-Core-i7-7700-disabled.xml 
b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-disabled.xml
new file mode 100644
index 0..fcdcc97e0
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-disabled.xml
@@ -0,0 +1,6 @@
+
+
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Core-i7-7700-enabled.xml 
b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-enabled.xml
new file mode 100644
index 0..9096a633a
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-enabled.xml
@@ -0,0 +1,9 @@
+
+
+  
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Core-i7-7700-guest.xml 
b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-guest.xml
new file mode 100644
index 0..c3561d597
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-guest.xml
@@ -0,0 +1,25 @@
+
+  Skylake-Client
+  Intel
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Core-i7-7700-host.xml 
b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-host.xml
new file mode 100644
index 0..c799394ea
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-host.xml
@@ -0,0 +1,26 @@
+
+  x86_64
+  Skylake-Client
+  Intel
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Core-i7-7700-json.xml 
b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-json.xml
new file mode 100644
index 0..12424bc67
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Core-i7-7700-json.xml
@@ -0,0 +1,9 @@
+
+  Skylake-Client
+  Intel
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Core-i7-7700.json 
b/tests/cputestdata/x86_64-cpuid-Core-i7-7700.json
new file mode 100644
index 0..5946a64c9
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Core-i7-7700.json
@@ -0,0 +1,499 @@
+{
+  "return": {
+"model": {
+  "name": "base",
+  "props": {
+"pfthreshold": false,
+"pku": false,
+"rtm": true,
+"tsc_adjust": true,
+"tsc-deadline": true,
+"xstore-en": false,
+"cpuid-0xb": true,
+"abm": true,
+"ia64": false,
+"kvm-mmu": false,
+"xsaveopt": true,
+"hv-spinlocks": -1,
+"tce": false,
+"realized": false,
+"kvm_steal_time": true,
+"smep": true,
+"fpu": true,
+"xcrypt": false,
+"sse4_2": true,
+"clflush": true,
+"sse4_1": true,
+"flushbyasid": false,
+"kvm-steal-time": true,
+"lm": true,
+"tsc": true,
+"adx": true,
+"fxsr": true,
+"sha-ni": false,
+"decodeassists": false,
+"hv-relaxed": false,
+"pclmuldq": true,
+"xgetbv1": true,
+"xstore": false,
+"vmcb_clean": false,
+"tsc-adjust": true,
+"vme": true,
+"vendor": "GenuineIntel",
+"arat": true,
+"ffxsr": false,
+"de": true,
+"aes": true,
+"pse": true,
+"ds-cpl": false,
+"fxsr_opt": false,
+"tbm": false,
+"sse": true,
+"phe-en": false,
+"f16c": true,
+"ds": false,
+"mpx": 

[libvirt] [PATCH 2/3] cputest: Add data for Intel(R) Xeon(R) CPU E5-2650 v4

2017-11-02 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 tests/cputest.c|   1 +
 .../x86_64-cpuid-Xeon-E5-2650-v4-disabled.xml  |   6 +
 .../x86_64-cpuid-Xeon-E5-2650-v4-enabled.xml   |   9 +
 .../x86_64-cpuid-Xeon-E5-2650-v4-guest.xml |  30 ++
 .../x86_64-cpuid-Xeon-E5-2650-v4-host.xml  |  34 ++
 .../x86_64-cpuid-Xeon-E5-2650-v4-json.xml  |  11 +
 .../cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.json  | 505 +
 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.xml |  43 ++
 8 files changed, 639 insertions(+)
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-disabled.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-enabled.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-guest.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-host.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-json.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.json
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.xml

diff --git a/tests/cputest.c b/tests/cputest.c
index 9c97f75a2..8a5316a88 100644
--- a/tests/cputest.c
+++ b/tests/cputest.c
@@ -1189,6 +1189,7 @@ mymain(void)
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E3-1245-v5", JSON_MODELS);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E5-2630-v3", JSON_HOST);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E5-2650-v3", JSON_HOST);
+DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E5-2650-v4", JSON_MODELS);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E7-4820", JSON_HOST);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E7-4830", JSON_MODELS_REQUIRED);
 DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E7-8890-v3", JSON_MODELS);
diff --git a/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-disabled.xml 
b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-disabled.xml
new file mode 100644
index 0..aacc7a2b1
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-disabled.xml
@@ -0,0 +1,6 @@
+
+
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-enabled.xml 
b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-enabled.xml
new file mode 100644
index 0..b11dd9b75
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-enabled.xml
@@ -0,0 +1,9 @@
+
+
+  
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-guest.xml 
b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-guest.xml
new file mode 100644
index 0..2fac54355
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-guest.xml
@@ -0,0 +1,30 @@
+
+  Skylake-Client
+  Intel
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-host.xml 
b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-host.xml
new file mode 100644
index 0..f482864a9
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-host.xml
@@ -0,0 +1,34 @@
+
+  x86_64
+  Broadwell
+  Intel
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-json.xml 
b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-json.xml
new file mode 100644
index 0..5dfce947b
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4-json.xml
@@ -0,0 +1,11 @@
+
+  Skylake-Client
+  Intel
+  
+  
+  
+  
+  
+  
+  
+
diff --git a/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.json 
b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.json
new file mode 100644
index 0..798e0dff2
--- /dev/null
+++ b/tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.json
@@ -0,0 +1,505 @@
+{
+  "return": {
+"model": {
+  "name": "base",
+  "props": {
+"pfthreshold": false,
+"pku": false,
+"rtm": true,
+"tsc_adjust": true,
+"tsc-deadline": true,
+"xstore-en": false,
+"cpuid-0xb": true,
+"abm": true,
+"ia64": false,
+"kvm-mmu": false,
+"xsaveopt": true,
+"hv-spinlocks": -1,
+"tce": false,
+"realized": false,
+"kvm_steal_time": true,
+"smep": true,
+"fpu": true,
+"xcrypt": false,
+"sse4_2": true,
+"clflush": true,
+"sse4_1": true,
+"flushbyasid": false,
+"kvm-steal-time": true,
+"lm": true,
+"tsc": true,
+"adx": true,
+"fxsr": true,
+"sha-ni": false,
+"decodeassists": false,
+"hv-relaxed": false,
+"pclmuldq": true,
+"xgetbv1": false,
+"xstore": false,
+"vmcb_clean": false,
+"tsc-adjust": true,
+"vme": true,
+"vendor": "GenuineIntel",
+"arat": true,
+"ffxsr": false,
+"de": true,
+"aes": true,
+"pse": true,
+"ds-cpl": false,
+

[libvirt] [PATCH 0/3] cputest: Add more CPU data files

2017-11-02 Thread Jiri Denemark
Jiri Denemark (3):
  cputest: Do not drop v[0-9] from CPU names
  cputest: Add data for Intel(R) Xeon(R) CPU E5-2650 v4
  cputest: Add data for Intel(R) Core(TM) i7-7700 CPU

 tests/cputest.c|  10 +-
 tests/cputestdata/cpu-parse.sh |   2 +-
 .../x86_64-cpuid-Core-i7-7700-disabled.xml |   6 +
 .../x86_64-cpuid-Core-i7-7700-enabled.xml  |   9 +
 ...est.xml => x86_64-cpuid-Core-i7-7700-guest.xml} |   0
 ...host.xml => x86_64-cpuid-Core-i7-7700-host.xml} |   0
 .../cputestdata/x86_64-cpuid-Core-i7-7700-json.xml |   9 +
 tests/cputestdata/x86_64-cpuid-Core-i7-7700.json   | 499 
 tests/cputestdata/x86_64-cpuid-Core-i7-7700.xml|  47 ++
 ...l => x86_64-cpuid-Xeon-E3-1245-v5-disabled.xml} |   0
 ...ml => x86_64-cpuid-Xeon-E3-1245-v5-enabled.xml} |   0
 .../x86_64-cpuid-Xeon-E3-1245-v5-guest.xml |  25 +
 .../x86_64-cpuid-Xeon-E3-1245-v5-host.xml  |  26 ++
 ...n.xml => x86_64-cpuid-Xeon-E3-1245-v5-json.xml} |   0
 ...1245.json => x86_64-cpuid-Xeon-E3-1245-v5.json} |   0
 ...3-1245.xml => x86_64-cpuid-Xeon-E3-1245-v5.xml} |   0
 ...l => x86_64-cpuid-Xeon-E5-2630-v3-disabled.xml} |   0
 ...ml => x86_64-cpuid-Xeon-E5-2630-v3-enabled.xml} |   0
 xml => x86_64-cpuid-Xeon-E5-2630-v3-guest.xml} |   0
 ...t.xml => x86_64-cpuid-Xeon-E5-2630-v3-host.xml} |   0
 ...n.xml => x86_64-cpuid-Xeon-E5-2630-v3-json.xml} |   0
 ...2630.json => x86_64-cpuid-Xeon-E5-2630-v3.json} |   0
 ...5-2630.xml => x86_64-cpuid-Xeon-E5-2630-v3.xml} |   0
 ...l => x86_64-cpuid-Xeon-E5-2650-v3-disabled.xml} |   0
 ...ml => x86_64-cpuid-Xeon-E5-2650-v3-enabled.xml} |   0
 xml => x86_64-cpuid-Xeon-E5-2650-v3-guest.xml} |   0
 ...t.xml => x86_64-cpuid-Xeon-E5-2650-v3-host.xml} |   0
 ...n.xml => x86_64-cpuid-Xeon-E5-2650-v3-json.xml} |   0
 ...2650.json => x86_64-cpuid-Xeon-E5-2650-v3.json} |   0
 ...5-2650.xml => x86_64-cpuid-Xeon-E5-2650-v3.xml} |   0
 ...l => x86_64-cpuid-Xeon-E5-2650-v4-disabled.xml} |   0
 .../x86_64-cpuid-Xeon-E5-2650-v4-enabled.xml   |   9 +
 .../x86_64-cpuid-Xeon-E5-2650-v4-guest.xml |  30 ++
 .../x86_64-cpuid-Xeon-E5-2650-v4-host.xml  |  34 ++
 .../x86_64-cpuid-Xeon-E5-2650-v4-json.xml  |  11 +
 .../cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.json  | 505 +
 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650-v4.xml |  43 ++
 .../x86_64-cpuid-Xeon-E7-8890-v3-disabled.xml  |   6 +
 ...ml => x86_64-cpuid-Xeon-E7-8890-v3-enabled.xml} |   0
 xml => x86_64-cpuid-Xeon-E7-8890-v3-guest.xml} |   0
 ...t.xml => x86_64-cpuid-Xeon-E7-8890-v3-host.xml} |   0
 ...n.xml => x86_64-cpuid-Xeon-E7-8890-v3-json.xml} |   0
 ...8890.json => x86_64-cpuid-Xeon-E7-8890-v3.json} |   0
 ...7-8890.xml => x86_64-cpuid-Xeon-E7-8890-v3.xml} |   0
 44 files changed, 1266 insertions(+), 5 deletions(-)
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-disabled.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-enabled.xml
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-guest.xml => 
x86_64-cpuid-Core-i7-7700-guest.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-host.xml => 
x86_64-cpuid-Core-i7-7700-host.xml} (100%)
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700-json.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700.json
 create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-7700.xml
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-disabled.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-disabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-enabled.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-enabled.xml} (100%)
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E3-1245-v5-guest.xml
 create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E3-1245-v5-host.xml
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-json.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-json.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245.json => 
x86_64-cpuid-Xeon-E3-1245-v5.json} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245.xml => 
x86_64-cpuid-Xeon-E3-1245-v5.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-disabled.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-disabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-enabled.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-enabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-guest.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-guest.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-host.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-host.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-json.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-json.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630.json => 
x86_64-cpuid-Xeon-E5-2630-v3.json} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630.xml => 
x86_64-cpuid-Xeon-E5-2630-v3.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650-disabled.xml 

[libvirt] [PATCH 1/3] cputest: Do not drop v[0-9] from CPU names

2017-11-02 Thread Jiri Denemark
Version is a significant part of some Xeon CPUs.

Signed-off-by: Jiri Denemark 
---
 tests/cputest.c   | 8 
 tests/cputestdata/cpu-parse.sh| 2 +-
 ...245-disabled.xml => x86_64-cpuid-Xeon-E3-1245-v5-disabled.xml} | 0
 ...-1245-enabled.xml => x86_64-cpuid-Xeon-E3-1245-v5-enabled.xml} | 0
 ...n-E3-1245-guest.xml => x86_64-cpuid-Xeon-E3-1245-v5-guest.xml} | 0
 ...eon-E3-1245-host.xml => x86_64-cpuid-Xeon-E3-1245-v5-host.xml} | 0
 ...eon-E3-1245-json.xml => x86_64-cpuid-Xeon-E3-1245-v5-json.xml} | 0
 ...-cpuid-Xeon-E3-1245.json => x86_64-cpuid-Xeon-E3-1245-v5.json} | 0
 ...64-cpuid-Xeon-E3-1245.xml => x86_64-cpuid-Xeon-E3-1245-v5.xml} | 0
 ...630-disabled.xml => x86_64-cpuid-Xeon-E5-2630-v3-disabled.xml} | 0
 ...-2630-enabled.xml => x86_64-cpuid-Xeon-E5-2630-v3-enabled.xml} | 0
 ...n-E5-2630-guest.xml => x86_64-cpuid-Xeon-E5-2630-v3-guest.xml} | 0
 ...eon-E5-2630-host.xml => x86_64-cpuid-Xeon-E5-2630-v3-host.xml} | 0
 ...eon-E5-2630-json.xml => x86_64-cpuid-Xeon-E5-2630-v3-json.xml} | 0
 ...-cpuid-Xeon-E5-2630.json => x86_64-cpuid-Xeon-E5-2630-v3.json} | 0
 ...64-cpuid-Xeon-E5-2630.xml => x86_64-cpuid-Xeon-E5-2630-v3.xml} | 0
 ...650-disabled.xml => x86_64-cpuid-Xeon-E5-2650-v3-disabled.xml} | 0
 ...-2650-enabled.xml => x86_64-cpuid-Xeon-E5-2650-v3-enabled.xml} | 0
 ...n-E5-2650-guest.xml => x86_64-cpuid-Xeon-E5-2650-v3-guest.xml} | 0
 ...eon-E5-2650-host.xml => x86_64-cpuid-Xeon-E5-2650-v3-host.xml} | 0
 ...eon-E5-2650-json.xml => x86_64-cpuid-Xeon-E5-2650-v3-json.xml} | 0
 ...-cpuid-Xeon-E5-2650.json => x86_64-cpuid-Xeon-E5-2650-v3.json} | 0
 ...64-cpuid-Xeon-E5-2650.xml => x86_64-cpuid-Xeon-E5-2650-v3.xml} | 0
 ...890-disabled.xml => x86_64-cpuid-Xeon-E7-8890-v3-disabled.xml} | 0
 ...-8890-enabled.xml => x86_64-cpuid-Xeon-E7-8890-v3-enabled.xml} | 0
 ...n-E7-8890-guest.xml => x86_64-cpuid-Xeon-E7-8890-v3-guest.xml} | 0
 ...eon-E7-8890-host.xml => x86_64-cpuid-Xeon-E7-8890-v3-host.xml} | 0
 ...eon-E7-8890-json.xml => x86_64-cpuid-Xeon-E7-8890-v3-json.xml} | 0
 ...-cpuid-Xeon-E7-8890.json => x86_64-cpuid-Xeon-E7-8890-v3.json} | 0
 ...64-cpuid-Xeon-E7-8890.xml => x86_64-cpuid-Xeon-E7-8890-v3.xml} | 0
 30 files changed, 5 insertions(+), 5 deletions(-)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-disabled.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-disabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-enabled.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-enabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-guest.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-guest.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-host.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-host.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245-json.xml => 
x86_64-cpuid-Xeon-E3-1245-v5-json.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245.json => 
x86_64-cpuid-Xeon-E3-1245-v5.json} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E3-1245.xml => 
x86_64-cpuid-Xeon-E3-1245-v5.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-disabled.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-disabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-enabled.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-enabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-guest.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-guest.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-host.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-host.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630-json.xml => 
x86_64-cpuid-Xeon-E5-2630-v3-json.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630.json => 
x86_64-cpuid-Xeon-E5-2630-v3.json} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2630.xml => 
x86_64-cpuid-Xeon-E5-2630-v3.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650-disabled.xml => 
x86_64-cpuid-Xeon-E5-2650-v3-disabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650-enabled.xml => 
x86_64-cpuid-Xeon-E5-2650-v3-enabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650-guest.xml => 
x86_64-cpuid-Xeon-E5-2650-v3-guest.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650-host.xml => 
x86_64-cpuid-Xeon-E5-2650-v3-host.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650-json.xml => 
x86_64-cpuid-Xeon-E5-2650-v3-json.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650.json => 
x86_64-cpuid-Xeon-E5-2650-v3.json} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E5-2650.xml => 
x86_64-cpuid-Xeon-E5-2650-v3.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E7-8890-disabled.xml => 
x86_64-cpuid-Xeon-E7-8890-v3-disabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E7-8890-enabled.xml => 
x86_64-cpuid-Xeon-E7-8890-v3-enabled.xml} (100%)
 rename tests/cputestdata/{x86_64-cpuid-Xeon-E7-8890-guest.xml => 
x86_64-cpuid-Xeon-E7-8890-v3-guest.xml} (100%)
 

[libvirt] [PATCH] cpu_map: Add cqm alternative name for cmt

2017-11-02 Thread Jiri Denemark
The Linux kernel shows our "cmt" feature as "cqm". Let's mention that name
in cpu_map.xml to make it easier to find.

Signed-off-by: Jiri Denemark 
---
 src/cpu/cpu_map.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/cpu/cpu_map.xml b/src/cpu/cpu_map.xml
index 96f4ce60d..e5da7a887 100644
--- a/src/cpu/cpu_map.xml
+++ b/src/cpu/cpu_map.xml
@@ -224,7 +224,7 @@
 
   
 
-
+ 
   
 
 
-- 
2.14.3

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] Release of libvirt-3.9.0

2017-11-02 Thread Daniel Veillard
  It's out! I tagged it in git and pushed signed tarballs and rpms to
the usual place:

   ftp://libvirt.org/libvirt/

I also made a 3.9.0 release of libvirt-python, but it's virtually equivalent
to 3.8.0 as no commits were made last month in that module.

People are strongly encouraged to upgrade, as 3.9.0 includes a security
fix; there is also a reasonable amount of user-visible new features,
improvements and bug fixes as usual:


Security:

- qemu: Ensure TLS clients always verify the server certificate
While it's reasonable to turn off client certificate validation, as
setting it up can be non-trivial, clients should always verify the
server certificate to avoid MITM attacks. However, libvirt was using
the same knob to control both checks, leading to CVE-2017-1000256 /
LSN-2017-0002.

New features:

- Add capability to allow hot (un)plug of a domain watchdog device

- Allow users to set device aliases
Users can set aliases to domain devices and thus identify them easily.

- qemu: Support multiqueue for virtio-blk
Multiqueue support for virtio-blk has been available in QEMU ever since
2.7.0, and now libvirt guests can enable it.

- Add virDomainSetLifecycleAction API
Provided a new API to allow dynamic guest lifecycle control for guest
reactions to poweroff, restart, or crash type events related to the
domain XML on_poweroff, on_reboot, and on_crash elements. The virsh
set-lifecycle-action command was created to control the actions.

- qemu: Allow cold(un)plugging and hot(un)plugging input devices

- net: Implement QoS for vhostuser

Improvements:

- Allow a logical volume to be created using LUKS
A logical volume may be created with an encryption element using the
"luks" format. This does require a previously created secret to store
the passphrase used to encrypt the volume. Adding the volume to a domain
can then either provide the secret or allow the consumer in the guest
to provide the passphrase in order to decrypt the volume.

- net: Ignore auto-generated MAC address when detaching an interface
If the MAC address has not been specified by the user, libvirt will try
to fill in the gaps by generating one; however, in some error paths
that led to confusing error messages, so when an auto-generated
MAC address is specified the error message will no longer include the
auto-generated MAC.

- net: Enable MAC address lookup for virDomainInterfaceStats

- apparmor: Several improvements
Changes include permitting access to data about USB devices and dnsmasq
instances, allowing spaces in guest names and many more.

- cpu: Use CPU information obtained from QEMU when possible
Recent QEMU versions can expose information about which CPU models are
available and usable on the host; libvirt will now make use of such
information whenever possible.

- hyperv: Various improvements
The error reported when clients can't connect to Hyper-V has been made
more descriptive, and memory limits for guests are now mapped to more
appropriate libvirt equivalents.

- qemu: Report QEMU error on failed migration
Instead of reporting a generic error, ask QEMU for a more detailed and
thus hopefully more helpful one.

- vbox: Implement autoport for RDP
libvirt will now obtain the (dynamically allocated) RDP port number
from VirtualBox itself, avoiding conflicts between multiple guests
wanting to use RDP at the same time.

- qemu: Allow rotation of small logs
On a host where numerous unique instances are executed per day, it's
quite possible that, even though each of the individual log files is
fairly small, collectively the quantity and volume may add tens of
thousands of log files to the /var/log/libvirt/qemu/ directory.
Removing the constraint that logs have to be bigger than 100 KiB before
they can be rotated solves the issue.

Bug fixes:

- Fix swapped interface statistics and QoS
Due to internal implementation, reported statistics for some types of
interfaces were swapped (RX appeared in TX and vice versa). Similarly,
QoS was set in reversed way.

- Properly resize local LUKS encrypted volume
Resizing of a local LUKS encrypted volume will now use qemu-img to
resize the volume. This will require configuring a secret for the LUKS
encrypted volume.

- qemu: Reserve PCI addresses for implicit i440fx devices
Failing to do so causes the addresses to be considered usable by
libvirt, which means they could be assigned to more than one device
resulting in the guest failing to start.

- spec: Restart libvirtd only at the end of the upgrade process
Use %posttrans to make sure libvirtd is not restarted before all other
components, such as the library itself and storage / hypervisor
drivers, have already been upgraded.

  Thanks everybody for your help with this release, be it with patches,
bug reports, ideas, reviews, docs, etc...

   Enjoy!


Re: [libvirt] [PATCH] remote: Fix libvirtd service memory leak when libvirt client was terminated unexpectedly

2017-11-02 Thread Martin Kletzander

On Tue, Oct 24, 2017 at 01:52:28PM +0800, xinhua.Cao wrote:

Based on commits 2033e8cc119454bc4273e8a41e66c899c60ba58b and
fe8f1c8b8650c8ab5e7d27338f4538582651bd14, we solved a libvirtd coredump
problem, but they introduced a memory leak scenario.


The first one is just syntactic sugar; it introduces no functional change.


The scenario is as follows:
1. a client registers a domain event, such as the reboot event;
2. the client is terminated unexpectedly, and then this client is never
freed by the libvirtd service.

remoteDispatchConnectDomainEventCallbackRegisterAny references the client,
but when the client is terminated before it calls the corresponding
deregister API, the client's reference count is never reduced to zero, so
the memory leak takes place. This patch deregisters all events when the
client is closed.


Can you elaborate more on how the client gets terminated?  Maybe the problem
is that there is a way to terminate the client without calling the FreeFunc
on it, and the fact that it doesn't go through the right cleanup procedure
should be what we focus on?

Also please wrap the commit message as any other commit.  See `git log` for
reference.


---
daemon/remote.c | 47 +--
1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index 3f7d2d3..2b5a18b 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -1686,25 +1686,16 @@ void remoteRelayConnectionClosedEvent(virConnectPtr conn ATTRIBUTE_UNUSED, int r
VIR_WARN("unexpected %s event deregister failure", name);   \
}   \
VIR_FREE(eventCallbacks);   \
+neventCallbacks = 0;\


This is OK, ACK to this hunk.  But I think this should be in a separate patch,
probably.


} while (0);

-/*
- * You must hold lock for at least the client
- * We don't free stuff here, merely disconnect the client's
- * network socket & resources.
- * We keep the libvirt connection open until any async
- * jobs have finished, then clean it up elsewhere
- */
-void remoteClientFreeFunc(void *data)
+static void
+remoteFreePrivCallbacks(void *data)


Why is it called Callbacks when it is not passed as a callback anywhere?  Why
does it take void *?  Why does it not have a 'Client' in the name when it
clearly works with a daemonClientPrivate data?


{
struct daemonClientPrivate *priv = data;

/* Deregister event delivery callback */
-if (priv->conn) {
-virIdentityPtr sysident = virIdentityGetSystem();
-
-virIdentitySetCurrent(sysident);
-
+if (priv && priv->conn) {
DEREG_CB(priv->conn, priv->domainEventCallbacks,
 priv->ndomainEventCallbacks,
 virConnectDomainEventDeregisterAny, "domain");
@@ -1723,6 +1714,26 @@ void remoteClientFreeFunc(void *data)
DEREG_CB(priv->conn, priv->qemuEventCallbacks,
 priv->nqemuEventCallbacks,
 virConnectDomainQemuMonitorEventDeregister, "qemu monitor");
+}
+}
+#undef DEREG_CB
+
+/*
+ * You must hold lock for at least the client
+ * We don't free stuff here, merely disconnect the client's
+ * network socket & resources.
+ * We keep the libvirt connection open until any async
+ * jobs have finished, then clean it up elsewhere
+ */
+void remoteClientFreeFunc(void *data)
+{
+struct daemonClientPrivate *priv = data;
+
+if (priv) {
+virIdentityPtr sysident = virIdentityGetSystem();
+
+virIdentitySetCurrent(sysident);
+remoteFreePrivCallbacks(priv);

if (priv->closeRegistered) {
if (virConnectUnregisterCloseCallback(priv->conn,


Why don't you also remove this callback in the new function?  Does the close
event not get propagated when you move it there?


@@ -1734,18 +1745,18 @@ void remoteClientFreeFunc(void *data)

virIdentitySetCurrent(NULL);
virObjectUnref(sysident);
+VIR_FREE(priv);
}
-
-VIR_FREE(priv);
}
-#undef DEREG_CB
-

static void remoteClientCloseFunc(virNetServerClientPtr client)
{
struct daemonClientPrivate *priv = virNetServerClientGetPrivateData(client);

-daemonRemoveAllClientStreams(priv->streams);
+if (priv) {


Can it happen that priv is NULL?  It should only be freed when the client is
freed, in which case this function should not be called at all.  This is a
warning light for me: if you encountered priv == NULL in this function, then it
signals that there is probably a problem somewhere else as well.


+daemonRemoveAllClientStreams(priv->streams);
+remoteFreePrivCallbacks(priv);
+}
}



Generally, I'm OK with splitting the Free function into two, one doing the
closing part and one freeing the data (similarly to what I suggested in another
thread for virNetDaemonDispose() just now), but this patch does it in a very
weird way. A standalone sketch of the leak pattern under discussion follows.
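
For reference, here is a standalone toy program (not libvirt code; all names
are made up) that demonstrates the reference-counting leak described in the
commit message, and the "deregister everything on close" fix:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int refs;
        int ncallbacks;
    } Client;

    static void clientRef(Client *c) { c->refs++; }
    static void clientUnref(Client *c)
    {
        if (--c->refs == 0) {
            printf("client freed\n");
            free(c);
        }
    }

    /* Each registered event callback holds its own client reference. */
    static void registerCallback(Client *c) { c->ncallbacks++; clientRef(c); }
    static void deregisterAll(Client *c)
    {
        while (c->ncallbacks > 0) {
            c->ncallbacks--;
            clientUnref(c);
        }
    }

    int main(void)
    {
        Client *c = calloc(1, sizeof(*c));
        c->refs = 1;             /* the connection's own reference */

        registerCallback(c);     /* e.g. a domain reboot event */

        /* The client terminates unexpectedly without deregistering: if we
         * only dropped the connection reference here, refs would stay at 1
         * and "client freed" would never print (the leak).  The fix is to
         * deregister all callbacks on close: */
        deregisterAll(c);
        clientUnref(c);          /* refs reaches 0; the client is freed */
        return 0;
    }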



Re: [libvirt] networking problem

2017-11-02 Thread msliu
Hi Daniel,

# ls /usr/share/libvirt/
cpu_map.xml  libvirtLogo.png  schemas

No "network" folder found

# locate default.xml
/etc/libvirt/storage/default.xml
/etc/libvirt/storage/autostart/default.xml
/home/satimis/.config/libvirt/storage/default.xml
/home/satimis/.config/libvirt/storage/autostart/default.xml
/usr/share/gutenprint/5.2/xml/escp2/inputslots/default.xml


Besides
# ls /etc/libvirt/qemu/networks/
autostart  NAT.xml

# ls /etc/libvirt/qemu/networks/autostart/
NAT.xml

There are 2 NAT.xml files with the same content:

# cat /etc/libvirt/qemu/networks/NAT.xml



  NAT
  2b550fe1-cff0-476e-aabe-f8a239231315
  
  
  
  

  

  



# cat /etc/libvirt/qemu/networks/autostart/NAT.xml



  NAT
  2b550fe1-cff0-476e-aabe-f8a239231315
  
  
  
  

  

  



Regards
SL


 Original Message 
Subject: Re: [libvirt] networking problem
From: "Daniel P. Berrange" 
Date: Thu, November 02, 2017 9:31 am
To: Stefan Hajnoczi 
Cc: ms...@reynoldstocks.com, libvir-list@redhat.com, k...@vger.kernel.org

On Thu, Nov 02, 2017 at 09:24:22AM +, Stefan Hajnoczi wrote:
> On Sun, Oct 29, 2017 at 04:07:09AM -0700, ms...@reynoldstocks.com wrote:
> > I have performed following steps:
> > 
> > $ virsh net-destroy default
> > $ virsh net-undefine default
> > 
> > Now I couldn't start guest with following warning popup:
> > Error starting domain: Network not found: no network with matching name
> > 'default'
> > 
> > Please advise how to re-create 'default' instead of reinstalling all
> > guests.
> 
> This is a libvirt question. I have CCed the libvirt mailing list so you
> can continue discussion there.
> 
> On Red Hat-based distros try reinstalling the
> libvirt-daemon-config-network package. On Debian-based distros try
> reinstalling the libvirt-daemon-system package. This should restore the
> /etc/libvirt/qemu/networks/default.xml file that you are missing.

No need to re-install those RPMs - libvirt leaves a copy of the stock XML
file at /usr/share/libvirt/networks/default.xml. So just run

 virsh net-define /usr/share/libvirt/networks/default.xml
 virsh net-start default


Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [jenkins-ci PATCH 0/2] fix libosinfo rpm build

2017-11-02 Thread Andrea Bolognani
On Thu, 2017-11-02 at 16:49 +0100, Pavel Hrdina wrote:
> On Thu, Nov 02, 2017 at 04:32:12PM +0100, Andrea Bolognani wrote:
> > I'm not sure this approach is the best one.
> > 
> > I like the idea of having a place where we can stick common
> > environment variables - for us that mostly means paths, and we
> > should definitely move guest configuration out of its definition
> > in Jenkins and into this repository, as we prototyped yesterday.
> 
> This is a fix for the commit that broke it; I'm not trying to move all
> environment variables out of Jenkins.

I understand that, and I didn't expect you to of course, but I was
considering your changes with that frame of reference and it looked
to me like you were changing some things that would have to be
changed again for that to happen, so I thought it would be better
to go in that direction right away.

> > But here you're still keeping that info local. Moreover, you're
> > moving the VIR_TEST_* variables from check_env to job_env, which
> > means they end up being defined even during regular 'make'. I seem
> > to recall Dan speaking up against that earlier.
> 
> This is not true, the VIR_TEST_* variables are defined only for
> "autotools-check-job", see .

You're right, sorry for the noise.

> > I think we should have:
> > 
> >   global_env - as the name implies, global; for $VIRT_PREFIX and all
> >other paths that affect one or more build jobs, like
> >$OSINFO_SYSTEM_DIR and $MAKE
> 
> I'm not sure whether there is some different way to do it, but I
> would rather use a different approach than these defines, since
> you can easily overwrite them or forget to include them.

See below.

> >   build_env  - local; for variables that affect the build step of a
> >single job (can't think of any right now)
> >   test_env   - local; for variables that affect the test suite of a
> >single job, like $VIR_TEST_DEBUG
> 
> I thought about splitting it into {build,test}, but we don't need to have
> it that way and it feels like overkill.

After looking into it better, I agree that splitting them doesn't
make much sense at all. How about local_env instead of job_env, to
complement global_env defined above? Assuming we don't discard my
proposal entirely, that is :)

> > For projects that inherit from anything but generic-*-job,
> > global_env will be included automatically; projects that don't use
> > any of the known build system will have to take care of that
> > themselves, but then again they already have to do that for make_env.
> 
> And this is exactly why I don't want to do it this way: the "global_env"
> needs to always be present, like it is now with all the environment
> variables configured in Jenkins.

Is there any way we can have that guarantee without asking users
of generic-*-job (which shouldn't be many at all, ideally just the
one after we've fixed libvirt-cim) to include it themselves?

I'm really not an expert in jenkins-job-builder but since we're
setting the environment using export in the 'command' section of
the job definition, I don't see much of a way around it.

So while I agree with your concern about forgetting to include
global_env when defining new jobs, I think that it would still be
a better approach than keeping the environment variables in the
per-worker Jenkins configuration, where they are not tracked and
basically entirely invisible to anyone but the CI administrators.

-- 
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v5 3/3] libvirtd: fix crash on termination

2017-11-02 Thread Martin Kletzander

On Thu, Nov 02, 2017 at 10:49:35AM +0300, Nikolay Shirokovskiy wrote:



On 01.11.2017 21:51, John Ferlan wrote:



On 10/31/2017 02:54 AM, Nikolay Shirokovskiy wrote:



On 30.10.2017 19:21, Martin Kletzander wrote:

On Mon, Oct 30, 2017 at 07:14:39AM -0400, John Ferlan wrote:

From: Nikolay Shirokovskiy 

The problem is the incorrect order of qemu driver shutdown and shutdown
of netserver threads that serve client requests (through the qemu driver
in particular).

Net server threads are shut down upon dispose, which is triggered
by the last daemon object unref at the end of the main function. At the
same time the qemu driver is shut down earlier in virStateCleanup. As a
result netserver threads see an invalid driver object in the middle of
request processing.

Let's move shutting down netserver threads earlier, to virNetDaemonClose.

Note: the order of the last daemon unref and virStateCleanup
was introduced in 85c3a182 for a valid reason.



I must say I don't believe that's true.  Reading it back, that patch is
wrong IMHO.  I haven't gone through all the details of it and I don't
remember them from when I rewrote part of it, but the way this should
be handled is different.

The way you can clearly identify such issues is when you see that one
thread determines the validity of data (a pointer in this case).  This
must never happen.  It means that the pointer is used from more places
than the object has references.  However, each of the pointers to such
an object should hold its own reference.

So the problem is probably somewhere else.


If I understand you correctly, we can fix the issue in 85c3a182 in
a different way, like adding a reference to the daemon for every
driver that uses it for shutdown inhibition. It would require passing
an unref function to the driver init function, or a bit of wild casting
to virObjectPtr for the unref. Not sure it is worth it in this case.


caveat: I'm still in a post KVM Forum travel brain fog...

Perhaps a bit simpler than that... Since the 'inhibitCallback' and
the 'inhibitOpaque' are essentially the mechanism that would pass/use
@dmn, it'd be up to the inhibitCallback to add/remove the reference,
*given* that the inhibitOpaque is a lockable object anyway...

Thus, virNetDaemonAddShutdownInhibition could use virObjectRef and
virNetDaemonRemoveShutdownInhibition could use virObjectUnref... The
"catch" is that for Shutdown the Unref would need to go after the
Unlock, roughly as sketched below.

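For illustration, a rough sketch of that idea (not the actual
implementation; the inhibition-counter field name is an assumption):

    void
    virNetDaemonAddShutdownInhibition(virNetDaemonPtr dmn)
    {
        virObjectLock(dmn);
        virObjectRef(dmn);               /* reference for this inhibition */
        dmn->autoShutdownInhibitions++;
        virObjectUnlock(dmn);
    }

    void
    virNetDaemonRemoveShutdownInhibition(virNetDaemonPtr dmn)
    {
        virObjectLock(dmn);
        dmn->autoShutdownInhibitions--;
        virObjectUnlock(dmn);
        virObjectUnref(dmn);             /* after the Unlock, as noted above */
    }
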
I believe that then would "replicate" the virObjectRef done in
daemonStateInit and the virObjectUnref done at the end of
daemonRunStateInit with respect to "some outside thread" being able to
use @dmn and not have it be Unref'd for the last time at any point in
time until the "consumer" was done with it.

Moving the Unref to after all the StateCleanup calls were done works
because it accomplished the same/similar task, but just held onto @dmn
longer because the original implementation didn't properly reference dmn
when it was being used for some other consumer/thread. Of course that
led to other problems which this series is addressing and I'm not clear
yet as to the impact vis-a-vis your StateShutdown series.


Ok, 85c3a182 can be rewritten this way. It is more straightforward to
ref/unref in the consumer thread instead of relying on consumer structure
(in this case 85c3a182 relies upon the fact that after virStateCleanup
no hypervisor driver would use the @dmn object).



That's what reference counting is for, each consumer should have its reference.


But we still have the issues addressed by this patch series and the state
shutdown series, because the order in which @dmn and other objects
will be disposed would be the same for 85c3a182 and the proposed 85c3a182
replacement.



Let me summarize below, I hope I'll touch on all the points.  It's hard
to follow since I'm currently reading 3 or more threads at the same time :)


Signed-off-by: John Ferlan 
---
src/rpc/virnetdaemon.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/src/rpc/virnetdaemon.c b/src/rpc/virnetdaemon.c
index 8c21414897..33bd8e3b06 100644
--- a/src/rpc/virnetdaemon.c
+++ b/src/rpc/virnetdaemon.c
@@ -881,6 +881,7 @@ virNetDaemonClose(virNetDaemonPtr dmn)
    virObjectLock(dmn);

    virHashForEach(dmn->servers, daemonServerClose, NULL);
+    virHashRemoveAll(dmn->servers);



If I get this correctly, you are removing the servers so that their workers get
cleaned up, but that should be done in a separate step.  Most probably what
daemonServerClose should be doing.  This is a workaround, but not a proper fix.


Are you sure?  The daemonServerClose is the callback for virHashForEach
which means the table->iterating would be set thus preventing
daemonServerClose from performing a virHashRemoveEntry of the server
element from the list.
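
For reference, a sketch of the constraint in question (assuming the
current hash table behaviour):

  /* virHashForEach() flags the table as iterating, so a callback like
   * daemonServerClose cannot call virHashRemoveEntry() itself; removal
   * has to happen as a second step, e.g. the virHashRemoveAll() above. */
  static int
  daemonServerClose(void *payload,
                    const void *key ATTRIBUTE_UNUSED,
                    void *opaque ATTRIBUTE_UNUSED)
  {
      virNetServerClose(payload);
      /* virHashRemoveEntry(dmn->servers, key) would be rejected here */
      return 0;
  }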



This just makes the window of opportunity (between daemonServerClose()
and the actual removal of the virNetServerPtr from the hash) smaller.
That's why I don't see it as a fix, rather as a workaround.

What I 

Re: [libvirt] [jenkins-ci PATCH 0/2] fix libosinfo rpm build

2017-11-02 Thread Andrea Bolognani
On Thu, 2017-11-02 at 15:56 +, Daniel P. Berrange wrote:
> On Thu, Nov 02, 2017 at 04:49:38PM +0100, Pavel Hrdina wrote:
> > On Thu, Nov 02, 2017 at 04:32:12PM +0100, Andrea Bolognani wrote:
> > > Moreover, we should be able to convert libvirt-cim to use
> > > autotools-*-job by tweaking its autogen.sh script, which would leave
> > > osinfo-db as the only user of generic-*-job.
> > 
> > +1
> 
> The only reason libvirt-cim didn't use the autotools job was that its
> setup script was called autoconfiscate instead of autogen.sh. I see that
> Andrea has fixed that craziness now so we can use the autotools job
> easily enough I expect :-)

There's one more difference still: libvirt-cim's autogen.sh doesn't
automatically call configure, which the autotools-*-jobs expect. Easily
fixed, though.
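
Something along these lines at the end of its autogen.sh would do
(hypothetical sketch):

  #!/bin/sh
  # regenerate the autotools machinery, then run configure the way
  # the autotools-*-jobs expect autogen.sh to do
  autoreconf --verbose --force --install || exit 1
  ./configure "$@"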

-- 
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v5 4/4] xlconfigtest: add tests for numa cell sibling distances

2017-11-02 Thread Wim ten Have
On Wed, 1 Nov 2017 20:45:46 +0100
Wim ten Have  wrote:

> On Thu, 26 Oct 2017 17:51:34 -0600
> Jim Fehlig  wrote:
> 
> > On 10/12/2017 01:31 PM, Wim Ten Have wrote:  
> > > From: Wim ten Have 
> > > 
> > > Test a bidirectional xen-xl domxml to and from native for numa
> > > support administration as brought under this patch series.
> > > 
> > > Signed-off-by: Wim ten Have 
> > > ---
> > >   .../test-fullvirt-vnuma-autocomplete.cfg   | 26 +++
> > >   .../test-fullvirt-vnuma-autocomplete.xml   | 85 
> > > ++
> > >   .../test-fullvirt-vnuma-nodistances.cfg| 26 +++
> > >   .../test-fullvirt-vnuma-nodistances.xml| 53 ++
> > >   .../test-fullvirt-vnuma-partialdist.cfg| 26 +++
> > >   .../test-fullvirt-vnuma-partialdist.xml| 60 +++
> > >   tests/xlconfigdata/test-fullvirt-vnuma.cfg | 26 +++
> > >   tests/xlconfigdata/test-fullvirt-vnuma.xml | 81 
> > > +
> > >   tests/xlconfigtest.c   |  6 ++
> > 
> > Cool! Thanks for the various configurations to test all the hairy parsing 
> > code :-).
> > 
> > Reviewed-by: Jim Fehlig 
> > 
> > BTW I should have mentioned it while reviewing 3/4, but we should also have 
> > tests for the libxl_domain_config generator, now that we have a test suite 
> > for 
> > that. See tests/libxlxml2domconfigtest.c.  
> 
>   There's a nasty issue living here.  Within specific test-harness you seem to
>   get through libxl_ctx_alloc() ... ?!  Running as ordinary user without 
> privileges?!  (ie. not root)
> 
> if (libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, log) < 0) {
> 
>   I'd expected this libxl_ctx_alloc() to have failed with;
> xencall: error: Could not obtain handle on privileged command 
> interface: Permission denied
> libxl: error: libxl.c:108:libxl_ctx_alloc: cannot open libxc handle: 
> Permission denied
> 
>   Funny it seems to go through and the padded memory content seem not to 
> match a
>   real libxl_ctx_alloc().  So it is bringing some kind of ctx structure which 
> seems _NOT_ real or
>   at least incorrect.
> 
>   From I need specific context to determine host/hypervisor phys_info() ... 
> causing
>   my libxl_domain_config added test to die with a SEGV under specific 
> xenlight run-time.
> 
>   (gdb) where
>   #0  0x72e18f41 in xc.hypercall_buffer_alloc () from 
> /lib64/libxenctrl.so.4.8
>   #1  0x72e18f98 in xc.hypercall_bounce_pre () from 
> /lib64/libxenctrl.so.4.8
>   #2  0x72e0b962 in xc_physinfo () from /lib64/libxenctrl.so.4.8
>   #3  0x7792c0bb in libxl_get_physinfo () from 
> /lib64/libxenlight.so.4.8
>   #4  0x00414012 in libxlMakeVnumaList (def=0x66f720, ctx=0x668060, 
> d_config=0x7fffd170)
>   at libxl/libxl_conf.c:621
>   #5  0x00418ca6 in libxlBuildDomainConfig (graphicsports=0x666940, 
> def=0x66f720, channelDir=0x0, 
>   ctx=0x668060, caps=0x669ee0, d_config=0x7fffd170) at 
> libxl/libxl_conf.c:2302
>   #6  0x0040f536 in testCompareXMLToDomConfig (
>   xmlfile=0x666860 
> "/home/wtenhave/WORK/libvirt/vNUMA/libvirt/tests/libxlxml2domconfigdata/basic-hvm.xml",
>  
>   jsonfile=0x666790 
> "/home/wtenhave/WORK/libvirt/vNUMA/libvirt/tests/libxlxml2domconfigdata/basic-hvm.json")
>   at libxlxml2domconfigtest.c:88
>   #7  0x0040f851 in testCompareXMLToDomConfigHelper (data=0x6482f0 
> ) at libxlxml2domconfigtest.c:148
> 
>   Should I escape the libxl_get_physinfo() under test?  It is not really 
> necessary there as whole
>   is academic.  If you have an advise please let me know ... .

  Not a nasty issue and resolved under PATCH v6 submitted today.

Regards.
- Wim.
  


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [jenkins-ci PATCH 0/2] fix libosinfo rpm build

2017-11-02 Thread Daniel P. Berrange
On Thu, Nov 02, 2017 at 04:49:38PM +0100, Pavel Hrdina wrote:
> On Thu, Nov 02, 2017 at 04:32:12PM +0100, Andrea Bolognani wrote:
> > Moreover, we should be able to convert libvirt-cim to use
> > autotools-*-job by tweaking its autogen.sh script, which would leave
> > osinfo-db as the only user of generic-*-job.
> 
> +1

The only reason libvirt-cim didn't use the autotools job was that its
setup script was called autoconfiscate instead of autogen.sh. I see that
Andrea has fixed that craziness now so we can use the autotools job
easily enough I expect :-)


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [jenkins-ci PATCH 2/2] jobs: use job_env in all job templates

2017-11-02 Thread Daniel P. Berrange
On Thu, Nov 02, 2017 at 04:54:38PM +0100, Pavel Hrdina wrote:
> On Thu, Nov 02, 2017 at 03:42:56PM +, Daniel P. Berrange wrote:
> > On Thu, Nov 02, 2017 at 01:45:51PM +0100, Pavel Hrdina wrote:
> > > This fixes libosinfo RPM build, we need to set "OSINFO_SYSTEM_DIR" for
> > > RPM build as well since the spec file runs tests and they need valid
> > > osinfo-db.  Broken by <359b069f7e4700bee337588eab0a9d83cac94010>.
> > 
> > I'm not suggesting you fix this right now, but our current RPM build
> > process is very broken by design, because we are completely ignoring
> dependencies between packages. We've hacked around this by culling the
> > BuildRequires lines from the spec file before running rpmbuild, and
> > relying on a bit of luck to have the rpmbuild find the stuff we just
> > built into  $VIRT_PREFIX. As this test suite problem shows though,
> > this is very fragile.
> 
> I completely agree with it, it's fragile and it uses all the environment
> variables to make sure that all the dependencies can be resolved.
> 
> > One way to fix this is to create a yum repo populated with all RPMs
> > we build during CI. Then setup a mock vroot to do the builds in,
> > which includes this local yum repo. That way we can honour the RPM
> > deps correctly getting a more accurate test of RPM build process.
> > 
> > My main concern with this is that mock is kind of slow to bootstrap
> > the initial vroot, so this would slow our CI somewhat.
> 
> We need to look into it how much it will slow down our CI since we have
> very limited resources.  Maybe it's time to ask whether we can have more
> resources.

I think it is well overdue for us to acquire more hardware for Jenkins
slaves. If CentOS have a 2nd machine they can spare for us then great,
otherwise we should try to push RH to fund some...

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [jenkins-ci PATCH 2/2] jobs: use job_env in all job templates

2017-11-02 Thread Pavel Hrdina
On Thu, Nov 02, 2017 at 03:42:56PM +, Daniel P. Berrange wrote:
> On Thu, Nov 02, 2017 at 01:45:51PM +0100, Pavel Hrdina wrote:
> > This fixes libosinfo RPM build, we need to set "OSINFO_SYSTEM_DIR" for
> > RPM build as well since the spec file runs tests and they need valid
> > osinfo-db.  Broken by <359b069f7e4700bee337588eab0a9d83cac94010>.
> 
> I'm not suggesting you fix this right now, but our current RPM build
> process is very broken by design, because we are completely ignoring
> dependencies between packages. We've hacked around this by culling the
> BuildRequires lines from the spec file before running rpmbuild, and
> relying on a bit of luck to have the rpmbuild find the stuff we just
> built into  $VIRT_PREFIX. As this test suite problem shows though,
> this is very fragile.

I completely agree with it, it's fragile and it uses all the environment
variables to make sure that all the dependencies can be resolved.

> One way to fix this is to create a yum repo populated with all RPMs
> we build during CI. Then setup a mock vroot to do the builds in,
> which includes this local yum repo. That way we can honour the RPM
> deps correctly getting a more accurate test of RPM build process.
> 
> My main concern with this is that mock is kind of slow to bootstrap
> the initial vroot, so this would slow our CI somewhat.

We need to look into it how much it will slow down our CI since we have
very limited resources.  Maybe it's time to ask whether we can have more
resources.

Pavel


signature.asc
Description: PGP signature
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [jenkins-ci PATCH 0/2] fix libosinfo rpm build

2017-11-02 Thread Pavel Hrdina
On Thu, Nov 02, 2017 at 04:32:12PM +0100, Andrea Bolognani wrote:
> On Thu, 2017-11-02 at 13:45 +0100, Pavel Hrdina wrote:
> > Pavel Hrdina (2):
> >   jobs: rename check_env into job_env
> >   jobs: use job_env in all job templates
> > 
> >  jobs/autotools.yaml  | 5 -
> >  jobs/defaults.yaml   | 2 +-
> >  jobs/perl-makemaker.yaml | 3 +++
> >  projects/libosinfo.yaml  | 2 +-
> >  projects/libvirt.yaml| 2 +-
> >  5 files changed, 10 insertions(+), 4 deletions(-)
> 
> I'm not sure this approach is the best one.
> 
> I like the idea of having a place where we can stick common
> environment variables - for us that mostly means paths, and we
> should definitely move guest configuration out of its definition
> in Jenkins and into this repository, as we prototyped yesterday.

This is a fix of the commit that broke it, I'm not trying to move all
environment variables out of Jenkins.

> But here you're still keeping that info local. Moreover, you're
> moving the VIR_TEST_* variables from check_env to job_env, which
> means they end up being defined even during regular 'make'. I seem
> to recall Dan speaking up against that earlier.

This is not true, the VIR_TEST_* variables are defined only for
"autotools-check-job", see .

> 
> I think we should have:
> 
>   global_env - as the name implies, global; for $VIRT_PREFIX and all
>other paths that affect one or more build jobs, like
>$OSINFO_SYSTEM_DIR and $MAKE

I'm not sure whether there is some different way to do it, but I
would rather use a different approach than these defines, since
you can easily overwrite them and forget to include something.

>   build_env  - local; for variables that affect the build step of a
>single job (can't think of any right now)
>   test_env   - local; for variables that affect the test suite of a
>single job, like $VIR_TEST_DEBUG

I thought about splitting it into {build,test}, but we don't need to have
it that way and it feels like overkill.

> For projects that inherit from anything but generic-*-job,
> global_env will be included automatically; projects that don't use
> any of the known build system will have to take care of that
> themselves, but then again they already have to do that for make_env.

And this is exactly why I don't want to do it this way, the "global_env"
needs to be present always, like it is now with all the environment
variables configured in Jenkins.

> Moreover, we should be able to convert libvirt-cim to use
> autotools-*-job by tweaking its autogen.sh script, which would leave
> osinfo-db as the only user of generic-*-job.

+1

Pavel


signature.asc
Description: PGP signature
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [PATCH v6 3/4] libxl: vnuma support

2017-11-02 Thread Wim Ten Have
From: Wim ten Have 

This patch generates a NUMA distance-aware libxl description from the
information extracted from a NUMA distance-aware libvirt XML file.

By default, if no NUMA node distance information is supplied in the
libvirt XML file, this patch uses the distances 10 for local and 20
for remote nodes/sockets.

Signed-off-by: Wim ten Have 
---
Changes on v1:
- Fix function calling (coding) standards.
- Use virReportError(...) under all failing circumstances.
- Reduce redundant (format->parse) code sorting bitmaps.
- Avoid non GNU shorthand notations, difficult code practice.
Changes on v2:
- Have autonomous defaults applied from virDomainNumaGetNodeDistance.
- Automatically make Xen driver simulate vnuma pnodes on non NUMA h/w.
- Compute 'memory unit=' making it the sum of all node memory.
Changes on v4:
- Escape virDomainNumaGetNodeCount() if effective.
Changes on v5:
- User VIR_WARN to warn instead of virReportError() abuse without error.
- Have ACPI set by libxlDomainDefPostParse() when virDomainNumaGetNodeCount
  suggests such for x86_64 under HVM.
- Remove VIR_DOMAIN_DEF_PARSE_ABI_UPDATE.
---
 src/libxl/libxl_conf.c   | 119 +++
 src/libxl/libxl_domain.c |   7 ++-
 2 files changed, 125 insertions(+), 1 deletion(-)

diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index ecbabfc79..c0dd634cd 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -593,6 +593,120 @@ libxlMakeDomBuildInfo(virDomainDefPtr def,
 return 0;
 }
 
+#ifdef LIBXL_HAVE_VNUMA
+static int
+libxlMakeVnumaList(virDomainDefPtr def,
+   libxl_ctx *ctx,
+   libxl_domain_config *d_config)
+{
+int ret = -1;
+size_t i, j;
+size_t nr_nodes;
+size_t num_vnuma;
+bool simulate = false;
+virBitmapPtr bitmap = NULL;
+virDomainNumaPtr numa = def->numa;
+libxl_domain_build_info *b_info = &d_config->b_info;
+libxl_physinfo physinfo;
+libxl_vnode_info *vnuma_nodes = NULL;
+
+if (!numa)
+return 0;
+
+num_vnuma = virDomainNumaGetNodeCount(numa);
+if (!num_vnuma)
+return 0;
+
+libxl_physinfo_init(&physinfo);
+if (libxl_get_physinfo(ctx, &physinfo) < 0) {
+libxl_physinfo_dispose(&physinfo);
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("libxl_get_physinfo_info failed"));
+return -1;
+}
+nr_nodes = physinfo.nr_nodes;
+libxl_physinfo_dispose(&physinfo);
+
+if (num_vnuma > nr_nodes) {
+VIR_WARN("Number of configured numa cells %zu exceeds the physical available nodes %zu, guest simulates numa",
+   num_vnuma, nr_nodes);
+simulate = true;
+}
+
+/*
+ * allocate the vnuma_nodes for assignment under b_info.
+ */
+if (VIR_ALLOC_N(vnuma_nodes, num_vnuma) < 0)
+return -1;
+
+/*
+ * parse the vnuma vnodes data.
+ */
+for (i = 0; i < num_vnuma; i++) {
+int cpu;
+libxl_bitmap vcpu_bitmap;
+libxl_vnode_info *p = &vnuma_nodes[i];
+
+libxl_vnode_info_init(p);
+
+/* pnode */
+p->pnode = simulate ? 0 : i;
+
+/* memory size */
+p->memkb = virDomainNumaGetNodeMemorySize(numa, i);
+
+/* vcpus */
+bitmap = virDomainNumaGetNodeCpumask(numa, i);
+if (bitmap == NULL) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("vnuma sibling %zu missing vcpus set"), i);
+goto cleanup;
+}
+
+if ((cpu = virBitmapNextSetBit(bitmap, -1)) < 0)
+goto cleanup;
+
+libxl_bitmap_init(&vcpu_bitmap);
+if (libxl_cpu_bitmap_alloc(ctx, &vcpu_bitmap, b_info->max_vcpus)) {
+virReportOOMError();
+goto cleanup;
+}
+
+do {
+libxl_bitmap_set(&vcpu_bitmap, cpu);
+} while ((cpu = virBitmapNextSetBit(bitmap, cpu)) >= 0);
+
+libxl_bitmap_copy_alloc(ctx, &p->vcpus, &vcpu_bitmap);
+libxl_bitmap_dispose(&vcpu_bitmap);
+
+/* vdistances */
+if (VIR_ALLOC_N(p->distances, num_vnuma) < 0)
+goto cleanup;
+p->num_distances = num_vnuma;
+
+for (j = 0; j < num_vnuma; j++)
+p->distances[j] = virDomainNumaGetNodeDistance(numa, i, j);
+}
+
+b_info->vnuma_nodes = vnuma_nodes;
+b_info->num_vnuma_nodes = num_vnuma;
+
+ret = 0;
+
+ cleanup:
+if (ret) {
+for (i = 0; i < num_vnuma; i++) {
+libxl_vnode_info *p = &vnuma_nodes[i];
+
+VIR_FREE(p->distances);
+}
+VIR_FREE(vnuma_nodes);
+}
+
+return ret;
+}
+#endif
+
 static int
 libxlDiskSetDiscard(libxl_device_disk *x_disk, int discard)
 {
@@ -2184,6 +2298,11 @@ libxlBuildDomainConfig(virPortAllocatorPtr graphicsports,
 if (libxlMakeDomBuildInfo(def, ctx, caps, d_config) < 0)
 return -1;
 
+#ifdef LIBXL_HAVE_VNUMA
+if (libxlMakeVnumaList(def, ctx, d_config) < 0)
+return -1;
+#endif
+

[libvirt] [PATCH v6 1/4] numa: describe siblings distances within cells

2017-11-02 Thread Wim Ten Have
From: Wim ten Have 

Add support for describing NUMA distances in a domain's <numa> <cell>
XML description.

Below is an example of a 4 node setup:

  <cpu>
    <numa>
      <cell id='0' cpus='0,4-7' memory='512000' unit='KiB'>
        <distances>
          <sibling id='0' value='10'/>
          <sibling id='1' value='21'/>
          <sibling id='2' value='31'/>
          <sibling id='3' value='41'/>
        </distances>
      </cell>
      <cell id='1' cpus='1,8-10,12-15' memory='512000' unit='KiB' memAccess='shared'>
        <distances>
          <sibling id='0' value='21'/>
          <sibling id='1' value='10'/>
          <sibling id='2' value='21'/>
          <sibling id='3' value='31'/>
        </distances>
      </cell>
      <cell id='2' cpus='2,11' memory='512000' unit='KiB' memAccess='shared'>
        <distances>
          <sibling id='0' value='31'/>
          <sibling id='1' value='21'/>
          <sibling id='2' value='10'/>
          <sibling id='3' value='21'/>
        </distances>
      </cell>
      <cell id='3' cpus='3' memory='512000' unit='KiB'>
        <distances>
          <sibling id='0' value='41'/>
          <sibling id='1' value='31'/>
          <sibling id='2' value='21'/>
          <sibling id='3' value='10'/>
        </distances>
      </cell>
    </numa>
  </cpu>

A <cell> defines a NUMA node.  <distances> describes the NUMA distance
from the <cell> to the other NUMA nodes (the <sibling>s).  For example,
in the above XML description, the distance between NUMA node0
<cell id='0'> and NUMA node2 <cell id='2'> is 31.

Valid distance values are '10 <= value <= 255'.  A distance value of 10
represents the distance to the node itself.  A distance value of 20
represents the default value for remote nodes but other values are
possible depending on the physical topology of the system.

When distances are not fully described, any missing sibling distance
values will default to 10 for local nodes and 20 for remote nodes.

If distance is given for A -> B, then we default B -> A to the same
value instead of 20.
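
For example, given only this partial, one-way description (hypothetical
fragment):

  <cell id='0' cpus='0' memory='512000' unit='KiB'>
    <distances>
      <sibling id='2' value='31'/>
    </distances>
  </cell>

the parser also records the cell2 -> cell0 distance as 31, and fills
every other unspecified pair with 10 (local) or 20 (remote).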

Signed-off-by: Wim ten Have 
Reviewed-by: Daniel P. Berrange 
---
Changes on v1:
- Add changes to docs/formatdomain.html.in describing schema update.
Changes on v2:
- Automatically apply distance symmetry maintaining cell <-> sibling.
- Check for maximum '255' on numaDistanceValue.
- Automatically complete empty distance ranges.
- Check that sibling_id's are in range with cell identifiers.
- Allow non-contiguous ranges, starting from any node id.
- Respect parameters as ATTRIBUTE_NONNULL fix functions and callers.
- Add and apply topology for LOCAL_DISTANCE=10 and REMOTE_DISTANCE=20.
Changes on v3:
- Add UNREACHABLE if one locality is unreachable from another.
- Add code cleanup aligning function naming in a separated patch.
- Add numa related driver code in a separated patch.
- Remove  from numaDistanceValue schema/basictypes.rng
- Correct doc changes.
Changes on v4:
- Fix symmetry error under virDomainNumaDefNodeDistanceParseXML()
Changes on v5:
- Apply doc suggestions.
- Apply virReportError() message suggestions.
- Remove redundant sanity checks from virDomainNumaDefNodeDistanceParseXML().
---
 docs/formatdomain.html.in   |  63 ++-
 docs/schemas/basictypes.rng |   7 ++
 docs/schemas/cputypes.rng   |  18 +
 src/conf/numa_conf.c| 191 +++-
 4 files changed, 275 insertions(+), 4 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 4f28dce35..df2f17f5d 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -1529,7 +1529,68 @@
 
 
 
-  This guest NUMA specification is currently available only for QEMU/KVM.
+  This guest NUMA specification is currently available only for
+  QEMU/KVM and Xen.  Whereas Xen driver also allows for a distinct
+  description of NUMA arranged sibling cell
+  distances Since 3.9.0.
+
+
+
+  Under NUMA hardware architecture, distinct resources such as memory
+  create a designated distance between cell and
+  siblings that now can be described with the help of
+  distances. A detailed description can be found within
+  the ACPI (Advanced Configuration and Power Interface Specification)
+  within the chapter explaining the system's SLIT (System Locality
+  Distance Information Table).
+
+
+
+...
+&lt;cpu&gt;
+  ...
+  &lt;numa&gt;
+    &lt;cell id='0' cpus='0,4-7' memory='512000' unit='KiB'&gt;
+      &lt;distances&gt;
+        &lt;sibling id='0' value='10'/&gt;
+        &lt;sibling id='1' value='21'/&gt;
+        &lt;sibling id='2' value='31'/&gt;
+        &lt;sibling id='3' value='41'/&gt;
+      &lt;/distances&gt;
+    &lt;/cell&gt;
+    &lt;cell id='1' cpus='1,8-10,12-15' memory='512000' unit='KiB' memAccess='shared'&gt;
+      &lt;distances&gt;
+        &lt;sibling id='0' value='21'/&gt;
+        &lt;sibling id='1' value='10'/&gt;
+        &lt;sibling id='2' value='21'/&gt;
+        &lt;sibling id='3' value='31'/&gt;
+      &lt;/distances&gt;
+    &lt;/cell&gt;
+    &lt;cell id='2' cpus='2,11' memory='512000' unit='KiB' memAccess='shared'&gt;
+      &lt;distances&gt;
+        &lt;sibling id='0' value='31'/&gt;
+        &lt;sibling id='1' value='21'/&gt;
+        &lt;sibling id='2' value='10'/&gt;
+        &lt;sibling id='3' value='21'/&gt;
+      &lt;/distances&gt;
+    &lt;/cell&gt;
+    &lt;cell id='3' cpus='3' memory='512000' unit='KiB'&gt;
+      &lt;distances&gt;
+        &lt;sibling id='0' value='41'/&gt;
+        &lt;sibling id='1' value='31'/&gt;
+        &lt;sibling id='2' value='21'/&gt;
+        &lt;sibling id='3' value='10'/&gt;
+      &lt;/distances&gt;
+    &lt;/cell&gt;
+  &lt;/numa&gt;
+  ...
+&lt;/cpu&gt;
+...
+
+
+  Under Xen driver, if no distances are given to describe
+  the SLIT data between different cells, it will default to a scheme
+  using 10 for local and 20 for remote distances.
 
 
 Events configuration
diff --git a/docs/schemas/basictypes.rng b/docs/schemas/basictypes.rng
index 1ea667cdf..1a18cd31b 100644
--- a/docs/schemas/basictypes.rng

[libvirt] [PATCH v6 0/4] numa: describe sibling nodes distances

2017-11-02 Thread Wim Ten Have
From: Wim ten Have 

This patch extends guest domain administration adding support to
advertise node sibling distances when configuring NUMA guests also
referred to as vNUMA (Virtual NUMA).

NUMA (Non-Uniform Memory Access), a method of configuring a cluster
of nodes within a single multiprocessing system such that it shares
processor local memory amongst others improving performance and the
ability of the system to be expanded.

A NUMA system could be illustrated as shown below. Within this 4-NODE
system, every socket is equipped with its own distinct memory and some
with I/O. Access to memory or I/O on remote nodes is only possible
through the "Interconnect". This results in different performance for
local and remote resources.

In contrast to NUMA we recognize the flat SMP system where no concept
of local or remote resource exists.  The disadvantage of high socket
count SMP systems is that the shared bus can easily become a performance
bottleneck under high activity.

+-+---++---+-+
|NODE0|   |   ||   |   |NODE3|
| | CPU00 | CPU03 || CPU12 | CPU15 | |
| |   |   ||   |   | |
| Mem +--- Socket0 ---<>--- Socket3 ---+ Mem |
| |   |   ||   |   | |
+-+ CPU01 | CPU02 || CPU13 | CPU14 | |
| I/O |   |   ||   |   | |
+-+---^---++---^---+-+
  ||
  |  Interconnect  |
  ||
+-v---++---v-+
|NODE1|   |   ||   |   |NODE2|
| | CPU04 | CPU07 || CPU08 | CPU11 | |
| |   |   ||   |   | |
| Mem +--- Socket1 ---<>--- Socket2 ---+ Mem |
| |   |   ||   |   | |
+-+ CPU05 | CPU06 || CPU09 | CPU10 | |
| I/O |   |   ||   |   | |
+-+---+---++---+---+-+

NUMA adds an intermediate level of memory shared amongst a few cores
per socket as illustrated above, so that data accesses do not have to
travel over a single bus.

Unfortunately the way NUMA does this adds its own limitations. This,
as visualized in the illustration above, happens when data is stored in
memory associated with Socket2 and is accessed by a CPU (core) in Socket0.
The processors use the "Interconnect" path to access resource on other
nodes. These "Interconnect" hops add data access delays. It is therefore
in our interest to describe the relative distances between nodes.

The relative distances between nodes are described in the system's SLIT
(System Locality Distance Information Table) which is part of the ACPI
(Advanced Configuration and Power Interface) specification.

On Linux systems the SLIT detail can be listed with help of the
'numactl -H' command. The above guest would show the following output.

[root@f25 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3
node 0 size: 2007 MB
node 0 free: 1931 MB
node 1 cpus: 4 5 6 7
node 1 size: 1951 MB
node 1 free: 1902 MB
node 2 cpus: 8 9 10 11
node 2 size: 1998 MB
node 2 free: 1910 MB
node 3 cpus: 12 13 14 15
node 3 size: 2015 MB
node 3 free: 1907 MB
node distances:
node   0   1   2   3
  0:  10  21  31  21
  1:  21  10  21  31
  2:  31  21  10  21
  3:  21  31  21  10

These patches extend core libvirt's XML description of NUMA cells to
include NUMA distance information and propagate it to Xen guests via
libxl.  Recently qemu landed support for constructing the SLIT since
commit 0f203430dd ("numa: Allow setting NUMA distance for different NUMA
nodes"). The core libvirt extensions in this patch set could be used to
propagate NUMA distances to qemu quests in the future.

Wim ten Have (4):
  numa: describe siblings distances within cells
  xenconfig: add domxml conversions for xen-xl
  libxl: vnuma support
  xlconfigtest: add tests for numa cell sibling distances

 docs/formatdomain.html.in  |  63 +++-
 docs/schemas/basictypes.rng|   7 +
 docs/schemas/cputypes.rng  |  18 ++
 src/conf/numa_conf.c   | 328 +++-
 src/conf/numa_conf.h   |  25 ++
 src/libvirt_private.syms   |   5 +
 src/libxl/libxl_conf.c | 119 
 src/libxl/libxl_domain.c   |   7 +-
 src/xenconfig/xen_xl.c | 335 +
 tests/libxlxml2domconfigdata/basic-hvm.json|  95 +-
 tests/libxlxml2domconfigdata/basic-hvm.xml |  66 +++-
 

[libvirt] [PATCH v6 2/4] xenconfig: add domxml conversions for xen-xl

2017-11-02 Thread Wim Ten Have
From: Wim ten Have 

This patch converts NUMA configurations between the Xen libxl
configuration file format and libvirt's XML format.

XML HVM domain on a 4 node (2 cores/socket) configuration:

  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='2097152' unit='KiB'>
        <distances>
          <sibling id='0' value='10'/>
          <sibling id='1' value='21'/>
          <sibling id='2' value='31'/>
          <sibling id='3' value='21'/>
        </distances>
      </cell>
      <cell id='1' cpus='2-3' memory='2097152' unit='KiB'>
        <distances>
          <sibling id='0' value='21'/>
          <sibling id='1' value='10'/>
          <sibling id='2' value='21'/>
          <sibling id='3' value='31'/>
        </distances>
      </cell>
      <cell id='2' cpus='4-5' memory='2097152' unit='KiB'>
        <distances>
          <sibling id='0' value='31'/>
          <sibling id='1' value='21'/>
          <sibling id='2' value='10'/>
          <sibling id='3' value='21'/>
        </distances>
      </cell>
      <cell id='3' cpus='6-7' memory='2097152' unit='KiB'>
        <distances>
          <sibling id='0' value='21'/>
          <sibling id='1' value='31'/>
          <sibling id='2' value='21'/>
          <sibling id='3' value='10'/>
        </distances>
      </cell>
    </numa>
  </cpu>

Xen xl.cfg domain configuration:

  vnuma = [["pnode=0","size=2048","vcpus=0-1","vdistances=10,21,31,21"],
   ["pnode=1","size=2048","vcpus=2-3","vdistances=21,10,21,31"],
   ["pnode=2","size=2048","vcpus=4-5","vdistances=31,21,10,21"],
   ["pnode=3","size=2048","vcpus=6-7","vdistances=21,31,21,10"]]

If there is no XML <distances> description amongst the <numa> <cell>
data, the conversion schema from xml to native will generate 10 for
local and 20 for all remote instances.
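
For instance, the same four-node guest with every <distances> element
omitted would convert to (hypothetical output following that rule):

  vnuma = [["pnode=0","size=2048","vcpus=0-1","vdistances=10,20,20,20"],
           ["pnode=1","size=2048","vcpus=2-3","vdistances=20,10,20,20"],
           ["pnode=2","size=2048","vcpus=4-5","vdistances=20,20,10,20"],
           ["pnode=3","size=2048","vcpus=6-7","vdistances=20,20,20,10"]]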

Signed-off-by: Wim ten Have 
---
Changes on v2:
- Reduce the indentation level under xenParseXLVnuma().
Changes on v3:
- Add the Numa core split functions required to interface.
Changes on v5:
- Apply suggested cosmetic suggestions.
- Apply virReportError() message suggestions.
- Order prototyping for Getters and Setters under numa_conf.h
- Break function arguments to one parameter per line conform code style.
---
 src/conf/numa_conf.c | 137 +++
 src/conf/numa_conf.h |  25 
 src/libvirt_private.syms |   5 +
 src/xenconfig/xen_xl.c   | 335 +++
 4 files changed, 502 insertions(+)

diff --git a/src/conf/numa_conf.c b/src/conf/numa_conf.c
index 5fbcc7204..7bba4120b 100644
--- a/src/conf/numa_conf.c
+++ b/src/conf/numa_conf.c
@@ -1114,6 +1114,132 @@ virDomainNumaGetNodeCount(virDomainNumaPtr numa)
 }
 
 
+size_t
+virDomainNumaSetNodeCount(virDomainNumaPtr numa, size_t nmem_nodes)
+{
+if (!nmem_nodes) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("Cannot set an empty mem_nodes set"));
+return 0;
+}
+
+if (numa->mem_nodes) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("Cannot alter an existing mem_nodes set"));
+return 0;
+}
+
+if (VIR_ALLOC_N(numa->mem_nodes, nmem_nodes) < 0)
+return 0;
+
+numa->nmem_nodes = nmem_nodes;
+
+return numa->nmem_nodes;
+}
+
+size_t
+virDomainNumaGetNodeDistance(virDomainNumaPtr numa,
+ size_t node,
+ size_t cellid)
+{
+virDomainNumaDistancePtr distances = NULL;
+
+if (node < numa->nmem_nodes)
+distances = numa->mem_nodes[node].distances;
+
+/*
+ * Present the configured distance value. If
+ * out of range or not available set the platform
+ * defined default for local and remote nodes.
+ */
+if (!distances ||
+!distances[cellid].value ||
+!numa->mem_nodes[node].ndistances)
+return (node == cellid) ? LOCAL_DISTANCE : REMOTE_DISTANCE;
+
+return distances[cellid].value;
+}
+
+
+int
+virDomainNumaSetNodeDistance(virDomainNumaPtr numa,
+ size_t node,
+ size_t cellid,
+ unsigned int value)
+{
+virDomainNumaDistancePtr distances;
+
+if (node >= numa->nmem_nodes) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("Argument 'node' %zu outranges "
+ "defined number of NUMA nodes"),
+   node);
+return -1;
+}
+
+distances = numa->mem_nodes[node].distances;
+if (!distances ||
+cellid >= numa->mem_nodes[node].ndistances) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("Arguments under memnode element do not "
+ "correspond with existing guest's NUMA cell"));
+return -1;
+}
+
+/*
+ * Advanced Configuration and Power Interface
+ * Specification version 6.1. Chapter 5.2.17
+ * System Locality Distance Information Table
+ * ... Distance values of 0-9 are reserved.
+ */
+if (value < LOCAL_DISTANCE ||
+value > UNREACHABLE) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+   _("Distance value of %d is not in valid range"),
+   value);
+return -1;
+}
+
+if (value == LOCAL_DISTANCE && node != cellid) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+   _("Distance value %d under node %zu is "
+ "LOCAL_DISTANCE and should be set to 10"),
+   value, node);
+return -1;
+}
+
+distances[cellid].cellid = cellid;
+distances[cellid].value = value;
+
+return distances[cellid].value;
+}
+
+
+size_t

[libvirt] [PATCH v6 4/4] xlconfigtest: add tests for numa cell sibling distances

2017-11-02 Thread Wim Ten Have
From: Wim ten Have 

Test a bidirectional xen-xl domxml to and from native for vnuma
support administration as brought under this patch series.

Added tests for the libxl_domain_config generator determining
vnuma conversion for XML-2-json and json-2-XML.

Signed-off-by: Wim ten Have 
---
Changes on v5:
- Added tests for libxl_domain_config generator.
- Introduced empty stubs for xc_physinfo(), xc_sharing_freed_pages(),
  xc_sharing_used_frames to satisfy libxl_get_physinfo() under test
  simulation for libxlxml2domconfigtest.c
---
 tests/libxlxml2domconfigdata/basic-hvm.json| 95 +-
 tests/libxlxml2domconfigdata/basic-hvm.xml | 66 ++-
 tests/virmocklibxl.c   | 13 +++
 .../test-fullvirt-vnuma-autocomplete.cfg   | 26 ++
 .../test-fullvirt-vnuma-autocomplete.xml   | 85 +++
 .../test-fullvirt-vnuma-nodistances.cfg| 26 ++
 .../test-fullvirt-vnuma-nodistances.xml| 53 
 .../test-fullvirt-vnuma-partialdist.cfg| 26 ++
 .../test-fullvirt-vnuma-partialdist.xml| 60 ++
 tests/xlconfigdata/test-fullvirt-vnuma.cfg | 26 ++
 tests/xlconfigdata/test-fullvirt-vnuma.xml | 81 ++
 tests/xlconfigtest.c   |  6 ++
 12 files changed, 559 insertions(+), 4 deletions(-)
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-autocomplete.cfg
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-autocomplete.xml
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.cfg
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.xml
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-partialdist.cfg
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-partialdist.xml
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.cfg
 create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.xml

diff --git a/tests/libxlxml2domconfigdata/basic-hvm.json b/tests/libxlxml2domconfigdata/basic-hvm.json
index 6fa41f34f..3a5071e14 100644
--- a/tests/libxlxml2domconfigdata/basic-hvm.json
+++ b/tests/libxlxml2domconfigdata/basic-hvm.json
@@ -5,17 +5,105 @@
 "uuid": "2147d599-9cc6-c0dc-92ab-4064b5446e9b"
 },
 "b_info": {
-"max_vcpus": 4,
+"max_vcpus": 6,
 "avail_vcpus": [
 0,
 1,
 2,
-3
+3,
+4,
+5
+],
+"vnuma_nodes": [
+{
+"memkb": 2097152,
+"distances": [
+10,
+21,
+31,
+41,
+51,
+61
+],
+"vcpus": [
+0
+]
+},
+{
+"memkb": 2097152,
+"distances": [
+21,
+10,
+21,
+31,
+41,
+51
+],
+"vcpus": [
+1
+]
+},
+{
+"memkb": 2097152,
+"distances": [
+31,
+21,
+10,
+21,
+31,
+41
+],
+"vcpus": [
+2
+]
+},
+{
+"memkb": 2097152,
+"distances": [
+41,
+31,
+21,
+10,
+21,
+31
+],
+"vcpus": [
+3
+]
+},
+{
+"memkb": 2097152,
+"distances": [
+51,
+41,
+31,
+21,
+10,
+21
+],
+"vcpus": [
+4
+]
+},
+{
+"memkb": 2097152,
+"distances": [
+61,
+51,
+41,
+31,
+21,
+10
+],
+"vcpus": [
+5
+]
+}
 ],
 "max_memkb": 1048576,
 "target_memkb": 1048576,
 "video_memkb": 8192,
-"shadow_memkb": 12288,
+"shadow_memkb": 14336,
 "device_model_version": "qemu_xen",
 "device_model": "/bin/true",
 "sched_params": {
@@ -25,6 +113,7 @@
 "pae": "True",
 "apic": "True",
 

Re: [libvirt] storage lock issues

2017-11-02 Thread Cedric Bosdonnat
On Thu, 2017-11-02 at 07:03 -0400, John Ferlan wrote:
> So how to fix - well seems to me the storageDriverLock in VolCreateXML
> may be the way since the refresh thread takes the driver lock first,
> then the pool object second.  Perhaps even like the build code where it
> takes it temporarily while getting the pool object. I'd have to think a
> bit more about though. Still might be something to try since the Vol
> Refresh thread takes it while running...

This is surely a bad hack, but this is fixing the problem I'm seeing.
Wouldn't the VolCreateXML function taking the lock for a too long time
and thus lead to other troubles?

 %< 
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index 18d630319..f5a1e7bc2 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -1825,6 +1825,7 @@ storageVolCreateXML(virStoragePoolPtr obj,
 if (backend->createVol(obj->conn, pool, voldef) < 0)
 goto cleanup;
 
+storageDriverLock();
 pool->volumes.objs[pool->volumes.count++] = voldef;
 volobj = virGetStorageVol(obj->conn, pool->def->name, voldef->name,
   voldef->key, NULL, NULL);
@@ -1858,9 +1859,7 @@ storageVolCreateXML(virStoragePoolPtr obj,
 
 VIR_FREE(buildvoldef);
 
-storageDriverLock();
 virStoragePoolObjLock(pool);
-storageDriverUnlock();
 
 voldef->building = false;
 pool->asyncjobs--;
@@ -1897,6 +1896,7 @@ storageVolCreateXML(virStoragePoolPtr obj,
 voldef = NULL;
 
  cleanup:
+storageDriverUnlock();
 virObjectUnref(volobj);
 virStorageVolDefFree(voldef);
 if (pool)
 %< 

--
Cedric

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [jenkins-ci PATCH 2/2] jobs: use job_env in all job templates

2017-11-02 Thread Daniel P. Berrange
On Thu, Nov 02, 2017 at 01:45:51PM +0100, Pavel Hrdina wrote:
> This fixes libosinfo RPM build, we need to set "OSINFO_SYSTEM_DIR" for
> RPM build as well since the spec file runs tests and they need valid
> osinfo-db.  Broken by <359b069f7e4700bee337588eab0a9d83cac94010>.

I'm not suggesting you fix this right now, but our current RPM build
process is very broken by design, because we are completely ignoring
dependencies between packages. We've hacked around this by culling the
BuildRequires lines from the spec file before running rpmbuild, and
relying on a bit of luck to have the rpmbuild find the stuff we just
built into  $VIRT_PREFIX. As this test suite problem shows though,
this is very fragile.

One way to fix this is to create a yum repo populated with all RPMs
we build during CI. Then setup a mock vroot to do the builds in,
which includes this local yum repo. That way we can honour the RPM
deps correctly getting a more accurate test of RPM build process.
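
Roughly, something like this (hypothetical sketch; the repo path and
chroot target are made up):

  # collect the RPMs built so far in this CI run into a local repo
  createrepo_c /var/lib/jenkins/ci-repo

  # build inside a mock chroot that can resolve BuildRequires against
  # both the distro repos and the local CI repo
  mock -r fedora-26-x86_64 \
       --addrepo=file:///var/lib/jenkins/ci-repo \
       --rebuild libosinfo-*.src.rpm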

My main concern with this is that mock is kind of slow to bootstrap
the initial vroot, so this would slow our CI somewhat.

> 
> Signed-off-by: Pavel Hrdina 
> ---
>  jobs/autotools.yaml  | 3 +++
>  jobs/perl-makemaker.yaml | 3 +++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/jobs/autotools.yaml b/jobs/autotools.yaml
> index 9ed5efe..e41d5ab 100644
> --- a/jobs/autotools.yaml
> +++ b/jobs/autotools.yaml
> @@ -41,6 +41,7 @@
>  builders:
>- shell: |
>{make_env}
> +  {job_env}
>mkdir build
>cd build
>../autogen.sh --prefix=$VIRT_PREFIX {autogen_args}
> @@ -82,6 +83,7 @@
>  builders:
>- shell: |
>{make_env}
> +  {job_env}
>cd build
>$MAKE -j{smp} syntax-check
>  publishers:
> @@ -164,6 +166,7 @@
>  builders:
>- shell: |
>{make_env}
> +  {job_env}
>cd build
>sed -i -e 's/BuildRequires: osinfo-db.*//' {name}.spec
>sed -i -e 's/BuildRequires: libvirt.*devel.*//' {name}.spec
> diff --git a/jobs/perl-makemaker.yaml b/jobs/perl-makemaker.yaml
> index 0a3227a..3e53a29 100644
> --- a/jobs/perl-makemaker.yaml
> +++ b/jobs/perl-makemaker.yaml
> @@ -42,6 +42,7 @@
>- shell: |
>perl Makefile.PL PREFIX="$VIRT_PREFIX"
>{make_env}
> +  {job_env}
>$MAKE
>$MAKE -j{smp} install
>$MAKE -j{smp} manifest
> @@ -83,6 +84,7 @@
>  builders:
>- shell: |
>{make_env}
> +  {job_env}
>$MAKE -j{smp} test {test_args}
>  publishers:
>- email:
> @@ -121,6 +123,7 @@
>  builders:
>- shell: |
>{make_env}
> +  {job_env}
>sed -i -e 's/BuildRequires: *perl-Sys-Virt.*//' *.spec
>sed -i -e 's/BuildRequires: *libvirt.*devel.*//' *.spec
>rm -f *.tar.{archive_format}
> -- 
> 2.13.6
> 
> --
> libvir-list mailing list
> libvir-list@redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] How to best handle the reoccurring of rom changes breaking cross version migrations?

2017-11-02 Thread Daniel P. Berrange
On Thu, Nov 02, 2017 at 04:14:06PM +0100, Christian Ehrhardt wrote:
> Ping - since there wasn't any reply so far - any best practices one could
> share?
> 
> Let me add a TL;DR:
> - bump of ipxe rom versions change the size of virtio-net-pci.rom
> - that breaks on migration "Length mismatch"
> 
> I'd guess the size of that rom has to be fixed up on the fly, but whether
> that is really OK, and how/where to do it, is the question.

The actual ROM contents will be transferred in the migration stream, so
the fact that the target host has ROMs with different content is not
important. The key thing that matters is that QEMU the target host loads
the ROMs at the same location, so that when the ROM contents is overwritten
with data from the incoming migration scheme, it all ends up at the same
place as it was on the source.

Getting this to happen requires pre-planning when building the ROMs. By
the time you hit the size change during migration it is too late and your
VM is basically doomed. When building you need to add padding. IIUC for
RHEL we artificially increased the size of the seabios and ipxe ROMs to
256k. So when later RHEL updates ship new seabios/ipxe they still fit in
the 256k region previously allowed for.
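
For illustration, the padding itself can be a single build step
(hypothetical command, using the 256 KiB figure from above):

  # pad the freshly built ROM to a fixed 256 KiB so later rebuilds,
  # whatever their real payload size, keep the same guest-visible length
  truncate --size=256K efi-virtio.rom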


... QEMU could really benefit from more formal docs around migration to
describe how users / vendors can protect themselves from the many pitfalls
like this...


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] networking problem

2017-11-02 Thread msliu
Hi Daniel,

# ls /usr/share/libvirt/
cpu_map.xml  libvirtLogo.png  schemas

No "network" folder found.

# locate default.xml
/etc/libvirt/storage/default.xml
/etc/libvirt/storage/autostart/default.xml
/home/satimis/.config/libvirt/storage/default.xml
/home/satimis/.config/libvirt/storage/autostart/default.xml
/usr/share/gutenprint/5.2/xml/escp2/inputslots/default.xml

Besides:

# ls /etc/libvirt/qemu/networks/
autostart  NAT.xml

# ls /etc/libvirt/qemu/networks/autostart/
NAT.xml

There are 2 NAT.xml files of the same content:

# cat /etc/libvirt/qemu/networks/NAT.xml
<network>
  <name>NAT</name>
  <uuid>2b550fe1-cff0-476e-aabe-f8a239231315</uuid>
  ...
</network>

# cat /etc/libvirt/qemu/networks/autostart/NAT.xml
<network>
  <name>NAT</name>
  <uuid>2b550fe1-cff0-476e-aabe-f8a239231315</uuid>
  ...
</network>

Regards
SL


 Original Message 
Subject: Re: [libvirt] networking problem
From: "Daniel P. Berrange" 
Date: Thu, November 02, 2017 9:31 am
To: Stefan Hajnoczi 
Cc: ms...@reynoldstocks.com, libvir-list@redhat.com, k...@vger.kernel.org

On Thu, Nov 02, 2017 at 09:24:22AM +, Stefan Hajnoczi wrote:
> On Sun, Oct 29, 2017 at 04:07:09AM -0700, ms...@reynoldstocks.com wrote:
> > I have performed following steps:
> > 
> > $ virsh net-destroy default
> > $ virsh net-undefine default
> > 
> > Now I couldn't start guest with following warning popup:
> > Error starting domain: Network not found: no network with matching name
> > 'default'
> > 
> > Please advise how to re-create 'default' instead of reinstalling all
> > guests.
> 
> This is a libvirt question.  I have CCed the libvirt mailing list so you
> can continue discussion there.
> 
> On Red Hat-based distros try reinstalling the
> libvirt-daemon-config-network package.  On Debian-based distros try
> reinstalling the libvirt-daemon-system package.  This should restore the
> /etc/libvirt/qemu/networks/default.xml file that you are missing.

No need to re-install those RPMs - Libvirt leaves a copy of the stock XML
file at /usr/share/libvirt/networks/default.xml.  So just run

  virsh define /usr/share/libvirt/networks/default.xml
  virsh start default


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|




--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [jenkins-ci PATCH 0/2] fix libosinfo rpm build

2017-11-02 Thread Andrea Bolognani
On Thu, 2017-11-02 at 13:45 +0100, Pavel Hrdina wrote:
> Pavel Hrdina (2):
>   jobs: rename check_env into job_env
>   jobs: use job_env in all job templates
> 
>  jobs/autotools.yaml  | 5 -
>  jobs/defaults.yaml   | 2 +-
>  jobs/perl-makemaker.yaml | 3 +++
>  projects/libosinfo.yaml  | 2 +-
>  projects/libvirt.yaml| 2 +-
>  5 files changed, 10 insertions(+), 4 deletions(-)

I'm not sure this approach is the best one.

I like the idea of having a place where we can stick common
environment variables - for us that mostly means paths, and we
should definitely move guest configuration out of its definition
in Jenkins and into this repository, as we prototyped yesterday.

But here you're still keeping that info local. Moreover, you're
moving the VIR_TEST_* variables from check_env to job_env, which
means they end up being defined even during regular 'make'. I seem
to recall Dan speaking up against that earlier.

I think we should have:

  global_env - as the name implies, global; for $VIRT_PREFIX and all
   other paths that affect one or more build jobs, like
   $OSINFO_SYSTEM_DIR and $MAKE
  build_env  - local; for variables that affect the build step of a
   single job (can't think of any right now)
  test_env   - local; for variables that affect the test suite of a
   single job, like $VIR_TEST_DEBUG

For projects that inherit from anything but generic-*-job,
global_env will be included automatically; projects that don't use
any of the known build system will have to take care of that
themselves, but then again they already have to do that for make_env.
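
Roughly like this (hypothetical sketch, reusing variables that already
appear in our configs):

  global_env: |
    export OSINFO_SYSTEM_DIR=$VIRT_PREFIX/share/osinfo
  test_env: |
    export VIR_TEST_EXPENSIVE=1
    export VIR_TEST_DEBUG=2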

Moreover, we should be able to convert libvirt-cim to use
autotools-*-job by tweaking its autogen.sh script, which would leave
osinfo-db as the only user of generic-*-job.

-- 
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] How to best handle the reoccurring of rom changes breaking cross version migrations?

2017-11-02 Thread Christian Ehrhardt
Ping - since there wasn't any reply so far - any best practices one could
share?

Let me add a TL;DR:
- bump of ipxe rom versions change the size of virtio-net-pci.rom
- that breaks on migration "Length mismatch"

I'd guess the size of that rom has to be fixed up on the fly, but whether
that is really OK, and how/where to do it, is the question.

Also to +1 on bad things for today - I made this a cross post to libvirt in
case there is someone who has done that in the past.
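
One plausible direction is QEMU's machine-type compat properties; a
purely hypothetical sketch (the "romsize" property name is invented
here for illustration, I don't know of an existing property that does
this):

  /* pin the ROM size for older machine types via a compat property, so
   * a new QEMU started with an old machine type would keep the old
   * guest-visible ROM length */
  static GlobalProperty compat_props_xenial[] = {
      { .driver = "virtio-net-pci", .property = "romsize", .value = "262144" },
  };

Whether something per machine type can really paper over an on-the-fly
size mismatch is exactly the open question, though.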


On Mon, Aug 28, 2017 at 4:36 PM, Christian Ehrhardt <
christian.ehrha...@canonical.com> wrote:

> Hi,
> migration issues due to rom changes seem to occur over and over in past
> years [1], [2],[3],[4],[5].
> From the past I know several workarounds (like just truncating to the
> bigger size) but all have various deficiencies.
>
> But OTOH rom's will always change due to fixes in them. And recently I
> found one such change [6] that affects the next Ubuntu release and wonder
> what the "right", well let's say best, way to fix it would be.
> Current issue:
> Length mismatch: :00:03.0/virtio-net-pci.rom: 0x4 in != 0x8:
> Invalid argument
> Due to efi-virtio.rom growing over 256k due to an update to a newer ipxe
> from upstream.
>
> I beg your pardon (for not being educated enough on this yet), but I want
> to avoid going down a route that fixes it in a sub-optimal way, and ask for
> some guidance. There might be better ways in the modern qemu of today than
> there were in the past.
> TL;DR I look for the best way (if any) to declare in the new qemu: "I know
> that the old size was smaller, let me fix that up on migration".
>
> I'll try on my own as I expect the machine type structures/mechanisms (and
> we have Ubuntu specific types that could encapsulate the rom status from
> the ipxe package to be coupled with the type) have all that is needed.
> Yet me almost randomly trying around there surely isn't the best way to go
> - so if there is some existing case or a short hint at what in there might
> be the best way to fixup a changed rom size on a migration I'd be very
> happy to hear about that.
>
> [1]: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1536331
> [2]: https://lists.gnu.org/archive/html/qemu-devel/2016-01/msg01881.html
> [3]: https://bugzilla.redhat.com/show_bug.cgi?id=1293566
> [4]: https://bugzilla.redhat.com/show_bug.cgi?id=1090093
> [5]: https://forge.univention.org/bugzilla/show_bug.cgi?id=38877
> [6]: https://bugs.launchpad.net/ubuntu/+source/ipxe/+bug/1713490
>
> P.S: As everybody else I don't mind so much on reverse migration to older
> releases
>
> --
> Christian Ehrhardt
> Software Engineer, Ubuntu Server
> Canonical Ltd
>



-- 
Christian Ehrhardt
Software Engineer, Ubuntu Server
Canonical Ltd
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] storage lock issues

2017-11-02 Thread Cedric Bosdonnat
Hi John,

On Thu, 2017-11-02 at 07:03 -0400, John Ferlan wrote:
> 
> On 11/02/2017 06:20 AM, Cedric Bosdonnat wrote:
> > Hi all,
> > 
> > I'm investigating a crash in the storage driver, and it seems the solution
> > is not as obvious as I wanted it.
> > 
> > Basically what happens is that libvirtd crashes due to the pool refresh 
> > thread
> > clearing the list of pool volumes while someone is adding a new volume. Here
> > is the valgrind log I have for that:
> > 
> > http://paste.opensuse.org/58733866
> > 
> 
> What version of libvirt?  Your line numbers don't match up with current
> HEAD.

It's a 3.8.0.

> > I tried adding a call to virStoragePoolObjLock() in the
> > virStorageVolPoolRefreshThread(), since the storageVolCreateXML()
> > appears to have it, but I'm getting a dead lock. Here are the waiting
> > threads I got from gdb:
> 
> virStoragePoolObjFindByName returns a locked pool obj, so not sure
> where/why you added this...

Oh, I overlooked that... then there is already a lock on it, and the solution
is elsewhere.

> > 
> > http://paste.opensuse.org/45961070
> > 
> > Any idea what would be a proper fix for that? My vision of the storage
> > drivers locks is too limited it seems ;)
> 
> From just a quick look...
> 
> The virStorageVolPoolRefreshThread will take storageDriverLock, get a
> locked pool object (virStoragePoolObjFindByName), clear out the pool,
> add back volumes that it knows about, unlock the pool object, and call
> storageDriverUnlock.
> 
> If you were adding a new pool... You'd take storageDriverLock, then
> eventually virStoragePoolObjAssignDef would be called and the pool's
> object lock taken and unlocked, and returns a locked pool object which
> gets later unlocked in cleanup, followed by a storageDriverUnlock.
> 
> If you're adding a new volume object, you'd get a locked pool object via
> virStoragePoolObjFromStoragePool, then if building, that lock gets
> dropped after increasing the async job count and setting the building
> flag, the volume is built, then the object lock retaken while
> temporarily holding the storageDriverLock, the async job count gets
> decremented and the building flag cleared; eventually we fall into
> cleanup, which unlocks the pool again.

Thanks for the details: this is really useful to globally understand how
locks are working in the storage driver. Maybe worth turning it into a doc
somewhere?

> So how to fix - well seems to me the storageDriverLock in VolCreateXML
> may be the way since the refresh thread takes the driver lock first,
> then the pool object second.  Perhaps even like the build code where it
> takes it temporarily while getting the pool object. I'd have to think a
> bit more about though. Still might be something to try since the Vol
> Refresh thread takes it while running...

Ok, I'll look into that direction.

> John
> 
> Not related to this problem per se, but what may help even more is if I
> can get the storage driver usage of a common object model patches
> completely reviewed, but that's a different problem ;-)... I'll have to
> go look and see if I may have fixed this there. The newer model uses
> hash tables, RW locks, and reduces the storage driver hammer lock, but
> this one condition may not have been addressed.

I can have a go at it and see if I can reproduce the crash with it. Not sure
I'll be the one skilled enough for the full review though ;)

--
Cedric

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [jenkins-ci PATCH 2/2] jobs: use job_env in all job templates

2017-11-02 Thread Pavel Hrdina
This fixes libosinfo RPM build, we need to set "OSINFO_SYSTEM_DIR" for
RPM build as well since the spec file runs tests and they need valid
osinfo-db.  Broken by <359b069f7e4700bee337588eab0a9d83cac94010>.

Signed-off-by: Pavel Hrdina 
---
 jobs/autotools.yaml  | 3 +++
 jobs/perl-makemaker.yaml | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/jobs/autotools.yaml b/jobs/autotools.yaml
index 9ed5efe..e41d5ab 100644
--- a/jobs/autotools.yaml
+++ b/jobs/autotools.yaml
@@ -41,6 +41,7 @@
 builders:
   - shell: |
   {make_env}
+  {job_env}
   mkdir build
   cd build
   ../autogen.sh --prefix=$VIRT_PREFIX {autogen_args}
@@ -82,6 +83,7 @@
 builders:
   - shell: |
   {make_env}
+  {job_env}
   cd build
   $MAKE -j{smp} syntax-check
 publishers:
@@ -164,6 +166,7 @@
 builders:
   - shell: |
   {make_env}
+  {job_env}
   cd build
   sed -i -e 's/BuildRequires: osinfo-db.*//' {name}.spec
   sed -i -e 's/BuildRequires: libvirt.*devel.*//' {name}.spec
diff --git a/jobs/perl-makemaker.yaml b/jobs/perl-makemaker.yaml
index 0a3227a..3e53a29 100644
--- a/jobs/perl-makemaker.yaml
+++ b/jobs/perl-makemaker.yaml
@@ -42,6 +42,7 @@
   - shell: |
   perl Makefile.PL PREFIX="$VIRT_PREFIX"
   {make_env}
+  {job_env}
   $MAKE
   $MAKE -j{smp} install
   $MAKE -j{smp} manifest
@@ -83,6 +84,7 @@
 builders:
   - shell: |
   {make_env}
+  {job_env}
   $MAKE -j{smp} test {test_args}
 publishers:
   - email:
@@ -121,6 +123,7 @@
 builders:
   - shell: |
   {make_env}
+  {job_env}
   sed -i -e 's/BuildRequires: *perl-Sys-Virt.*//' *.spec
   sed -i -e 's/BuildRequires: *libvirt.*devel.*//' *.spec
   rm -f *.tar.{archive_format}
-- 
2.13.6

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [jenkins-ci PATCH 1/2] jobs: rename check_env into job_env

2017-11-02 Thread Pavel Hrdina
The check_env was used only for "make check" but there might be some
environment variables that needs to be configured for more than one
job of the same project.

Signed-off-by: Pavel Hrdina 
---
 jobs/autotools.yaml | 2 +-
 jobs/defaults.yaml  | 2 +-
 projects/libosinfo.yaml | 2 +-
 projects/libvirt.yaml   | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/jobs/autotools.yaml b/jobs/autotools.yaml
index 19f62be..9ed5efe 100644
--- a/jobs/autotools.yaml
+++ b/jobs/autotools.yaml
@@ -120,7 +120,7 @@
 builders:
   - shell: |
   {make_env}
-  {check_env}
+  {job_env}
   cd build
   if ! $MAKE -j{smp} check
   then
diff --git a/jobs/defaults.yaml b/jobs/defaults.yaml
index 0e628bb..43face2 100644
--- a/jobs/defaults.yaml
+++ b/jobs/defaults.yaml
@@ -9,6 +9,6 @@
   if [ "$unamestr" = 'FreeBSD' ]; then
   MAKE='gmake'
   fi
-check_env: |
+job_env: |
 smp: 3
 spam: yman...@redhat.com libvirt...@redhat.com
diff --git a/projects/libosinfo.yaml b/projects/libosinfo.yaml
index 9ab2281..a02246f 100644
--- a/projects/libosinfo.yaml
+++ b/projects/libosinfo.yaml
@@ -11,7 +11,7 @@
   - libvirt-freebsd-10
   - libvirt-freebsd-11
 title: libosinfo
-check_env: |
+job_env: |
   export OSINFO_SYSTEM_DIR=$VIRT_PREFIX/share/osinfo
 jobs:
   - autotools-build-job:
diff --git a/projects/libvirt.yaml b/projects/libvirt.yaml
index 5661066..459aa17 100644
--- a/projects/libvirt.yaml
+++ b/projects/libvirt.yaml
@@ -28,7 +28,7 @@
   parent_jobs: 'libvirt-master-build'
   - autotools-check-job:
   parent_jobs: 'libvirt-master-syntax-check'
-  check_env: |
+  job_env: |
 export VIR_TEST_EXPENSIVE=1
 export VIR_TEST_DEBUG=2
   - autotools-rpm-job:
-- 
2.13.6

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [jenkins-ci PATCH 0/2] fix libosinfo rpm build

2017-11-02 Thread Pavel Hrdina
Pavel Hrdina (2):
  jobs: rename check_env into job_env
  jobs: use job_env in all job templates

 jobs/autotools.yaml  | 5 -
 jobs/defaults.yaml   | 2 +-
 jobs/perl-makemaker.yaml | 3 +++
 projects/libosinfo.yaml  | 2 +-
 projects/libvirt.yaml| 2 +-
 5 files changed, 10 insertions(+), 4 deletions(-)

-- 
2.13.6

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] storage lock issues

2017-11-02 Thread John Ferlan


On 11/02/2017 06:20 AM, Cedric Bosdonnat wrote:
> Hi all,
> 
> I'm investigating a crash in the storage driver, and it seems the solution
> is not as obvious as I wanted it.
> 
> Basically what happens is that libvirtd crashes due to the pool refresh thread
> clearing the list of pool volumes while someone is adding a new volume. Here
> is the valgrind log I have for that:
> 
> http://paste.opensuse.org/58733866
> 

What version of libvirt?  Your line numbers don't match up with current
HEAD.

> I tried adding a call to virStoragePoolObjLock() in the
> virStorageVolPoolRefreshThread(), since the storageVolCreateXML()
> appears to have it, but I'm getting a dead lock. Here are the waiting
> threads I got from gdb:

virStoragePoolObjFindByName returns a locked pool obj, so not sure
where/why you added this...

> 
> http://paste.opensuse.org/45961070
> 
> Any idea what would be a proper fix for that? My vision of the storage
> drivers locks is too limited it seems ;)

From just a quick look...

The virStorageVolPoolRefreshThread will take storageDriverLock, get a
locked pool object (virStoragePoolObjFindByName), clear out the pool,
add back volumes that it knows about, unlock the pool object, and call
storageDriverUnlock.

If you were adding a new pool... You'd take storageDriverLock, then
eventually virStoragePoolObjAssignDef would be called and the pool's
object lock taken and unlocked, and returns a locked pool object which
gets later unlocked in cleanup, followed by a storageDriverUnlock.

If you're adding a new volume object, you'd get a locked pool object via
virStoragePoolObjFromStoragePool, then if building, that lock gets
dropped after increasing the async job count and setting the building
flag, the volume is built, then the object lock retaken while
temporarily holding the storageDriverLock, the async job count gets
decremented and the building flag cleared, eventually we fall into
cleanup which unlocks the pool again.

So how to fix - well seems to me the storageDriverLock in VolCreateXML
may be the way since the refresh thread takes the driver lock first,
then the pool object second.  Perhaps even like the build code where it
takes it temporarily while getting the pool object. I'd have to think a
bit more about it, though. Still might be something to try since the Vol
Refresh thread takes it while running...
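
IOW the ordering rule both threads need to respect would look something
like this (just a sketch of the idea, not actual code):

    /* Anyone needing both locks must take them in this order: */
    storageDriverLock();            /* driver lock first...        */
    virStoragePoolObjLock(pool);    /* ...pool object lock second  */
    storageDriverUnlock();          /* drop the big lock ASAP      */

    /* ... work on the locked pool object, e.g. add the volume ... */

    virStoragePoolObjUnlock(pool);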

John

Not related to this problem per se, but what may help even more is if I
can get the storage driver's common object model patches completely
reviewed - but that's a different problem ;-)... I'll have to
go look and see if I may have fixed this there. The newer model uses
hash tables, RW locks, and reduces the storage driver hammer lock, but
this one condition may not have been addressed.

> 
> --
> Cedric
> 



[libvirt] storage lock issues

2017-11-02 Thread Cedric Bosdonnat
Hi all,

I'm investigating a crash in the storage driver, and it seems the solution
is not as obvious as I wanted it to be.

Basically what happens is that libvirtd crashes due to the pool refresh thread
clearing the list of pool volumes while someone is adding a new volume. Here
is the valgrind log I have for that:

http://paste.opensuse.org/58733866

I tried adding a call to virStoragePoolObjLock() in the
virStorageVolPoolRefreshThread(), since the storageVolCreateXML()
appears to have it, but I'm getting a deadlock. Here are the waiting
threads I got from gdb:

http://paste.opensuse.org/45961070

Any idea what would be a proper fix for that? My vision of the storage
driver's locks is too limited it seems ;)

--
Cedric



Re: [libvirt] [PATCH] news: Update for 3.9.0 release

2017-11-02 Thread John Ferlan


>>> +  constraints that log have to be bigger than 100 KiB before they can
>>> +  be rotated solves the issue.
>>
>> s/issue.$/issue. However, this may increase the number of files until
>> they are automatically rotated.
> 
> I don't think that's true: the same number of log files will be
> created, it's just that now more files will be rotated. So I left
> out that part.
> 

I was just reading the commit '6c43149c4':

"Dropping 'minsize 100k' allows rotating small files, which will
increase the number of log files, but 'rotate 4' ensures they will
be removed after a month.
"

and trying to extrapolate. I'm fine with dropping it though. We really
need to get better at making the author of a series do this stuff!
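
For reference, the logrotate profile shipped for QEMU guest logs looks
roughly like this after that commit (quoting from memory - check the
actual /etc/logrotate.d/libvirtd.qemu for the exact contents):

    /var/log/libvirt/qemu/*.log {
            weekly
            missingok
            rotate 4
            compress
            delaycompress
            copytruncate
    }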

John

>> (Personally, not quite sure how that rotation actually occurs).
> 
> Not sure myself. I think the logrotate profile is installed along
> with libvirt, but you have to enable it explicitly for rotation to
> actually occur?
> 



Re: [libvirt] [PATCH] news: Update for 3.9.0 release

2017-11-02 Thread Andrea Bolognani
On Wed, 2017-11-01 at 16:19 -0400, John Ferlan wrote:
> > +
> > +
> > +  This new API, also exposed through the
> > +  set-lifecycle-action virsh command, allows
> > +  the user to dynamically control how the guest will react to being
> > +  powered off, being restarted or crashing.
> 
> This one reads strangely to me...  As a suggestion
> 
> Provided a new API to allow dynamic guest lifecycle control for guest
> reactions to poweroff, restart, or crash type events related to the
> domain XML on_poweroff, on_reboot, and
> on_crash elements. The virsh
> set-lifecycle-action command was created to control the actions.

You forgot to close the element here ;)

> > +  constraints that log have to be bigger than 100 KiB before they can
> > +  be rotated solves the issue.
> 
> s/issue.$/issue. However, this may increase the number of files until
> they are automatically rotated.

I don't think that's true: the same number of log files will be
created, it's just that now more files will be rotated. So I left
out that part.

> (Personally, not quite sure how that rotation actually occurs).

Not sure myself. I think the logrotate profile is installed along
with libvirt, but you have to enable it explicitly for rotation to
actually occur?

> > +  
> > +
> > +  qemu: Ensure TLS clients always verify the server certificate
> > +
> > +
> > +  While it's reasonable to turn off client certificate validation,
> > +  as setting it up can be non-trivial, clients should always verify
> > +  the server certificate to avoid MITM attacks. libvirt was, however,
> > +  using the same knob to control both checks, leading to
> > +  CVE-2017-1000256 / LSN-2017-0002.
> > +
> > +  

As suggested by Peter, I've moved this to a separate "Security"
section, and pushed the whole thing.

Thanks for the review and all the improvements :)

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [libvirt] [PATCH] cputest: Skip tests requiring JSON_MODELS if QEMU is disabled

2017-11-02 Thread Jiri Denemark
On Thu, Nov 02, 2017 at 10:38:52 +0100, Pavel Hrdina wrote:
> On Wed, Nov 01, 2017 at 06:57:42PM +0100, Jiri Denemark wrote:
> > Some tests require JSON_MODELS to be parsed into qemuCaps and applied
> > when computing CPU models and such test cannot succeed if QEMU driver is
> > disabled. Let's mark the tests with JSON_MODELS_REQUIRED and skip the
> > appropriate parts if building without QEMU.
> > 
> > On the other hand, CPU tests with JSON_MODELS should succeed even if
> > model definitions from QEMU are not parsed and applied. Let's explicitly
> > test this by repeating the tests without JSON_MODELS set.
> > 
> > This fixes the build with QEMU driver disabled, e.g., on some
> > architectures on RHEL/CentOS.
> > 
> > Signed-off-by: Jiri Denemark 
> > ---
> >  tests/cputest.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++---------------------
> >  1 file changed, 49 insertions(+), 21 deletions(-)
> >
> > diff --git a/tests/cputest.c b/tests/cputest.c
> > index 5d1fe7d99..8e94c2152 100644
> > --- a/tests/cputest.c
> > +++ b/tests/cputest.c
> > @@ -464,6 +464,7 @@ typedef enum {
> >  JSON_NONE,
> >  JSON_HOST,
> >  JSON_MODELS,
> > +JSON_MODELS_REQUIRED,
> >  } cpuTestCPUIDJson;
> 
> After discussion on IRC please describe when JSON_MODELS_REQUIRED should
> be used, preferably in the code, commit message is good as well but if
> the code gets moved, it's easier to have it in the code.

Done and pushed, thanks.

Jirka



Re: [libvirt] [PATCH] cputest: Skip tests requiring JSON_MODELS if QEMU is disabled

2017-11-02 Thread Pavel Hrdina
On Wed, Nov 01, 2017 at 06:57:42PM +0100, Jiri Denemark wrote:
> Some tests require JSON_MODELS to be parsed into qemuCaps and applied
> when computing CPU models and such test cannot succeed if QEMU driver is
> disabled. Let's mark the tests with JSON_MODELS_REQUIRED and skip the
> appropriate parts if building without QEMU.
> 
> On the other hand, CPU tests with JSON_MODELS should succeed even if
> model definitions from QEMU are not parsed and applied. Let's explicitly
> test this by repeating the tests without JSON_MODELS set.
> 
> This fixes the build with QEMU driver disabled, e.g., on some
> architectures on RHEL/CentOS.
> 
> Signed-off-by: Jiri Denemark 
> ---
>  tests/cputest.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++---------------------
>  1 file changed, 49 insertions(+), 21 deletions(-)
>
> diff --git a/tests/cputest.c b/tests/cputest.c
> index 5d1fe7d99..8e94c2152 100644
> --- a/tests/cputest.c
> +++ b/tests/cputest.c
> @@ -464,6 +464,7 @@ typedef enum {
>  JSON_NONE,
>  JSON_HOST,
>  JSON_MODELS,
> +JSON_MODELS_REQUIRED,
>  } cpuTestCPUIDJson;

After discussion on IRC please describe when JSON_MODELS_REQUIRED should
be used, preferably in the code, commit message is good as well but if
the code gets moved, it's easier to have it in the code.
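
E.g. something along these lines (just a sketch of what I have in mind):

    typedef enum {
        JSON_NONE,
        JSON_HOST,
        JSON_MODELS,
        /* Use this instead of JSON_MODELS when a test cannot pass
         * unless the CPU models from QEMU are parsed into qemuCaps;
         * such tests have to be skipped when building without the
         * QEMU driver. */
        JSON_MODELS_REQUIRED,
    } cpuTestCPUIDJson;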

Since this fixes only tests I would say SFF.

Reviewed-by: Pavel Hrdina 



Re: [libvirt] networking problem

2017-11-02 Thread Daniel P. Berrange
On Thu, Nov 02, 2017 at 09:24:22AM +, Stefan Hajnoczi wrote:
> On Sun, Oct 29, 2017 at 04:07:09AM -0700, ms...@reynoldstocks.com wrote:
> > I have performed following steps:
> > 
> > $ virsh net-destroy default
> > $ virsh net-undefine default
> > 
> > Now I couldn't start the guest and got the following error popup:
> > Error starting domain: Network not found: no network with matching name
> > 'default'
> > 
> > Please advise how to re-create 'default' instead of reinstalling all
> > guests.
> 
> This is a libvirt question.  I have CCed the libvirt mailing list so you
> can continue discussion there.
> 
> On Red Hat-based distros try reinstalling the
> libvirt-daemon-config-network package.  On Debian-based distros try
> reinstalling the libvirt-daemon-system package.  This should restore the
> /etc/libvirt/qemu/networks/default.xml file that you are missing.

No need to re-install those RPMs - Libvirt leaves a copy of the stock XML
file at /usr/share/libvirt/networks/default.xml.  So just run

  virsh net-define /usr/share/libvirt/networks/default.xml
  virsh net-start default
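
and, if you want the network brought up automatically from then on, also
mark it for autostart:

  virsh net-autostart default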


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt] networking problem

2017-11-02 Thread Stefan Hajnoczi
On Sun, Oct 29, 2017 at 04:07:09AM -0700, ms...@reynoldstocks.com wrote:
> I have performed following steps:
> 
> $ virsh net-destroy default
> $ virsh net-undefine default
> 
> Now I couldn't start the guest and got the following error popup:
> Error starting domain: Network not found: no network with matching name
> 'default'
> 
> Please advise how to re-create 'default' instead of reinstalling all
> guests.

This is a libvirt question.  I have CCed the libvirt mailing list so you
can continue discussion there.

On Red Hat-based distros try reinstalling the
libvirt-daemon-config-network package.  On Debian-based distros try
reinstalling the libvirt-daemon-system package.  This should restore the
/etc/libvirt/qemu/networks/default.xml file that you are missing.

Stefan



Re: [libvirt] [PATCH v2 3/4] qemu: Use predictable file names for memory-backend-file

2017-11-02 Thread Michal Privoznik
On 11/02/2017 12:15 AM, John Ferlan wrote:
> 
> 
> On 10/24/2017 07:41 AM, Michal Privoznik wrote:
>> In some cases management application needs to allocate memory for
>> qemu upfront and then just let qemu use that. Since we don't want
>> to expose path for memory-backend-file anywhere in the domain
>> XML, we can generate predictable paths. In this case:
>>
>>   $memoryBackingDir/libvirt/qemu/$shortName/$alias
>>
>> where $shortName is result of virDomainObjGetShortName().
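
(For illustration: with the default memoryBackingDir of a privileged
daemon this resolves to something like
/var/lib/libvirt/qemu/ram/libvirt/qemu/1-fedora/ram-node0 for a
hypothetical domain with id 1, name 'fedora' and a 'ram-node0' memory
backend.)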
> 
> s/Obj/Def
> 
>>
>> Signed-off-by: Michal Privoznik 
>> ---
>>  src/qemu/qemu_command.c|   9 +-
>>  src/qemu/qemu_conf.c   |  55 -
>>  src/qemu/qemu_conf.h   |   9 +-
>>  src/qemu/qemu_driver.c |  17 +++
>>  src/qemu/qemu_hotplug.c|   2 +-
>>  src/qemu/qemu_process.c| 137 +++--
>>  src/qemu/qemu_process.h|   8 +-
>>  .../qemuxml2argv-cpu-numa-memshared.args   |   6 +-
>>  .../qemuxml2argv-fd-memory-numa-topology.args  |   3 +-
>>  .../qemuxml2argv-fd-memory-numa-topology2.args |   6 +-
>>  .../qemuxml2argv-fd-memory-numa-topology3.args |   9 +-
>>  .../qemuxml2argv-hugepages-memaccess2.args |   9 +-
>>  12 files changed, 207 insertions(+), 63 deletions(-)
>>
> 
> There's parts of this that would seemingly be able to be separated out
> so that it's easier to review...
> 
>> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
>> index 82d2fb2a3..d6d2654cd 100644
>> --- a/src/qemu/qemu_command.c
>> +++ b/src/qemu/qemu_command.c
>> @@ -3439,7 +3439,7 @@ qemuBuildMemoryBackendStr(virJSONValuePtr *backendProps,
>>  } else {
>>  /* We can have both pagesize and mem source. If that's the case,
>>   * prefer hugepages as those are more specific. */
>> -if (qemuGetMemoryBackingPath(cfg, &memPath) < 0)
>> +if (qemuGetMemoryBackingPath(def, cfg, mem->info.alias, &memPath) < 0)
>>  goto cleanup;
>>  }
>>  
>> @@ -3542,12 +3542,13 @@ qemuBuildMemoryCellBackendStr(virDomainDefPtr def,
>>  unsigned long long memsize = virDomainNumaGetNodeMemorySize(def->numa,
>>  cell);
>>  
>> +if (virAsprintf(&alias, "ram-node%zu", cell) < 0)
>> +goto cleanup;
>> +
> 
> This is just a simple trivial move and not related...
> 
>>  *backendStr = NULL;
>>  mem.size = memsize;
>>  mem.targetNode = cell;
>> -
>> -if (virAsprintf(&alias, "ram-node%zu", cell) < 0)
>> -goto cleanup;
>> +mem.info.alias = alias;

Nope. It is related. So the idea I'm implementing here is that the file
name we put onto qemu cmd line ends with the alias of the device. And
the alias is taken from mem.info.alias. Therefore I need to set it.

>>  
>>  if ((rc = qemuBuildMemoryBackendStr(&props, &backendType, cfg, priv->qemuCaps,
>>  def, &mem, priv->autoNodeset, false)) < 0)
>> diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
>> index 23dcc25e1..22c023255 100644
>> --- a/src/qemu/qemu_conf.c
>> +++ b/src/qemu/qemu_conf.c
>> @@ -1750,9 +1750,41 @@ qemuGetDomainHupageMemPath(const virDomainDef *def,
>>  }
>>  
>>  
>> +int
>> +qemuGetMemoryBackingBasePath(virQEMUDriverConfigPtr cfg,
>> + char **path)
>> +{
>> +return virAsprintf(path, "%s/libvirt/qemu", cfg->memoryBackingDir);
> 
> Not of a fan of redundancy, but I'm also not sure what better way there
> is...
> 
> For privileged domains it's "/var/lib/libvirt/qemu/ram/libvirt/qemu/"
> and unpriv'd it's $XDG_CONFIG_HOME+"/qemu/ram/libvirt/qemu".
> 
> The former repeats "libvirt/qemu" while the latter only repeats "qemu".
> 
> I would think we could drop the "qemu" at the very least, but it doesn't
> matter that much to me.

Yeah. I hear you. But trying to hack around this would cause more
problems than it would solve IMO.

> 
> Random thought - do memory file paths even work for non priv'd domains?

They should. I don't see a reason why they shouldn't.

> 
> 
>> +}
>> +
>> +
>> +int
>> +qemuGetMemoryBackingDomainPath(const virDomainDef *def,
>> +   virQEMUDriverConfigPtr cfg,
>> +   char **path)
>> +{
>> +char *shortName = NULL;
>> +char *base = NULL;
>> +int ret = -1;
>> +
>> +if (!(shortName = virDomainDefGetShortName(def)) ||
>> +qemuGetMemoryBackingBasePath(cfg, &base) < 0 ||
>> +virAsprintf(path, "%s/%s", base, shortName) < 0)
>> +goto cleanup;
>> +
> 
> As I noted in review of patch 1...  This continues the logic of creating
> paths that start with "-", which to me is, well, not right. Maybe others
> aren't offended by it.
> 
> Is there any way we can prepend something here to "shortName" - I'd say
> "ram", but that's redundant with the 

Re: [libvirt] [PATCH v2 1/4] conf: s/virDomainObjGetShortName/virDomainDefGetShortName/

2017-11-02 Thread Michal Privoznik
On 11/02/2017 12:11 AM, John Ferlan wrote:
> 
> 
> On 10/24/2017 07:41 AM, Michal Privoznik wrote:
>> This function works over domain definition and not domain object.
>> Its name is thus misleading.
>>
>> Signed-off-by: Michal Privoznik 
>> ---
>>  src/conf/domain_conf.c   | 4 ++--
>>  src/conf/domain_conf.h   | 2 +-
>>  src/libvirt_private.syms | 2 +-
>>  src/qemu/qemu_conf.c | 2 +-
>>  src/qemu/qemu_domain.c   | 4 ++--
>>  src/qemu/qemu_driver.c   | 2 +-
>>  6 files changed, 8 insertions(+), 8 deletions(-)
>>
> 
> Reviewed-by: John Ferlan 
> 
> John
> 
> BTW: I wonder if this function is somewhat broken - it's using def->id
> in order to generate the output, right?  Which means for paths generated
> before the guest starts, they'll start with "-1"...  IOW: Directory and
> file names that start with "-".
> 
> This wasn't a problem originally (commit id a042275a) since the
> resulting path had "domain-" prepended to it - resulting in
> "domain--1-QEMUGuest1, for example).
> 
> Personally, this looks REALLY odd when (re)using for example with a
> hugepage test where the path is "-mem-path
> /dev/hugepages2M/libvirt/qemu/-1-QEMUGuest1".

This is because we don't set an ID for domains when testing. However, in
real scenarios this function is called for running domains. Therefore,
the path will always start with a natural number. I don't think there's
anything to fix really. We could go with a "domain-" prefix but that'd be
no better in fact. Yes, it would help with the 'rm' case but other than
that it has no added value.

Michal



Re: [libvirt] [PATCH v2 4/4] news: Document predictable file names for memory-backend-file

2017-11-02 Thread Michal Privoznik
On 11/02/2017 12:15 AM, John Ferlan wrote:
> 
> 
> On 10/24/2017 07:41 AM, Michal Privoznik wrote:
>> Signed-off-by: Michal Privoznik 
>> ---
>>  docs/news.xml | 11 +++
>>  1 file changed, 11 insertions(+)
>>
> 
> +1 for news.xml, but you'll need to be sure to update it when you get
> around to pushing this.

Yup. It'll be after the release so I will need to create whole new
structure for the new release.

Thanks.

Michal



Re: [libvirt] [PATCH v5 3/3] libvirtd: fix crash on termination

2017-11-02 Thread Nikolay Shirokovskiy


On 01.11.2017 21:51, John Ferlan wrote:
> 
> 
> On 10/31/2017 02:54 AM, Nikolay Shirokovskiy wrote:
>>
>>
>> On 30.10.2017 19:21, Martin Kletzander wrote:
>>> On Mon, Oct 30, 2017 at 07:14:39AM -0400, John Ferlan wrote:
 From: Nikolay Shirokovskiy 

 The problem is incorrect order of qemu driver shutdown and shutdown
 of netserver threads that serve client requests (thru qemu driver
 particularly).

 Net server threads are shutdown upon dispose which is triggered
 by last daemon object unref at the end of main function. At the same
 time qemu driver is shutdown earlier in virStateCleanup. As a result
 netserver threads see invalid driver object in the middle of request
 processing.

 Let's move shutting down netserver threads earlier to virNetDaemonClose.

 Note: order of last daemon unref and virStateCleanup
 is introduced in 85c3a182 for a valid reason.

>>>
>>> I must say I don't believe that's true.  Reading it back, that patch is
>>> wrong IMHO.  I haven't gone through all the details of it and I don't
>>> remember them from when I rewrote part of it, but the way this should
>>> be handled is different.
>>>
>>> The way you can clearly identify such issues is when you see that one
>>> thread determines the validity of data (pointer in this case).  This
>>> must never happen.  That means that the pointer is used from more places
>>> than how many references that object has.  However each of the pointers
>>> for such object should have their own reference.
>>>
>>> So the problem is probably somewhere else.
>>
>> If I understand you correctly we can fix the issue in 85c3a182 in
>> a different way, like adding a reference to the daemon for every
>> driver that uses it for shutdown inhibition. It would require passing
>> an unref function to the driver init function, or a bit of wild casting
>> to virObjectPtr for the unref. Not sure it is worth it in this case.
> 
> caveat: I'm still in a post KVM Forum travel brain fog...
> 
> Perhaps a bit more simple than that... Since the 'inhibitCallback' and
> the 'inhibitOpaque' are essentially the mechanism that would pass/use
> @dmn, then it'd be up to the inhibitCallback to add/remove the reference
> with the *given* that the inhibitOpaque is a lockable object anyway...
> 
> Thus, virNetDaemonAddShutdownInhibition could use virObjectRef and
> virNetDaemonRemoveShutdownInhibition could use virObjectUnref... The
> "catch" is that for Shutdown the Unref would need to go after Unlock.
> 
> I believe that then would "replicate" the virObjectRef done in
> daemonStateInit and the virObjectUnref done at the end of
> daemonRunStateInit with respect to "some outside thread" being able to
> use @dmn and not have it be Unref'd for the last time at any point in
> time until the "consumer" was done with it.
> 
> Moving the Unref to after all the StateCleanup calls were done works
> because it accomplished the same/similar task, but just held onto @dmn
> longer because the original implementation didn't properly reference dmn
> when it was being used for some other consumer/thread. Of course that
> led to other problems which this series is addressing and I'm not clear
> yet as to the impact vis-a-vis your StateShutdown series.

Ok, 85c3a182 can be rewritten this way. It is more straightforward to
ref/unref by consumer thread instead of relying on consumer structure
(in this case 85c3a182 relies upon the fact that after virStateCleanup
no hypervisor driver would use @dmn object).
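
Something like this, I guess (just a sketch, assuming the inhibition
counter keeps its current name in virnetdaemon.c):

    void
    virNetDaemonAddShutdownInhibition(virNetDaemonPtr dmn)
    {
        virObjectLock(dmn);
        virObjectRef(dmn);           /* consumer now holds its own ref */
        dmn->autoShutdownInhibitions++;
        virObjectUnlock(dmn);
    }

    void
    virNetDaemonRemoveShutdownInhibition(virNetDaemonPtr dmn)
    {
        virObjectLock(dmn);
        dmn->autoShutdownInhibitions--;
        virObjectUnlock(dmn);
        virObjectUnref(dmn);         /* unref only after unlocking */
    }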

But we still have the issues addressed by this patch series and the state
shutdown series because the order in which @dmn and other objects
will be disposed would be the same for 85c3a182 and the proposed
85c3a182 replacement.

Nikolay

> 
>>
>> Anyway the initical conditions for current patch stay the same - 
>> daemon object will be unrefed after state drivers cleanup and 
>> RPC requests could deal with invalid state driver objects.
>>
>>>
 Signed-off-by: John Ferlan 
 ---
 src/rpc/virnetdaemon.c | 1 +
 1 file changed, 1 insertion(+)

 diff --git a/src/rpc/virnetdaemon.c b/src/rpc/virnetdaemon.c
 index 8c21414897..33bd8e3b06 100644
 --- a/src/rpc/virnetdaemon.c
 +++ b/src/rpc/virnetdaemon.c
 @@ -881,6 +881,7 @@ virNetDaemonClose(virNetDaemonPtr dmn)
     virObjectLock(dmn);

     virHashForEach(dmn->servers, daemonServerClose, NULL);
 +    virHashRemoveAll(dmn->servers);

>>>
>>> If I get this correctly, you are removing the servers so that their
>>> workers get cleaned up, but that should be done in a separate step.
>>> Most probably what daemonServerClose should be doing.  This is a
>>> workaround, but not a proper fix.
> 
> Are you sure?  The daemonServerClose is the callback for virHashForEach
> which means the table->iterating would be set thus preventing
> daemonServerClose from performing a virHashRemoveEntry of the server
> element from the