Re: [libvirt] [PATCH v2 07/20] network: Alter virNetworkObj @class_id to be @classIdMap

2017-08-15 Thread John Ferlan


On 08/15/2017 11:32 AM, Michal Privoznik wrote:
> On 07/26/2017 05:05 PM, John Ferlan wrote:
>> Change the variable name to be a bit more descriptive and less confusing
>> when used with the data.network.actual->class_id.
>>
>> Signed-off-by: John Ferlan 
>> ---
>>  src/conf/virnetworkobj.c| 39 ---
>>  src/conf/virnetworkobj.h|  2 +-
>>  src/network/bridge_driver.c | 10 +-
>>  3 files changed, 26 insertions(+), 25 deletions(-)
>>
>> diff --git a/src/conf/virnetworkobj.c b/src/conf/virnetworkobj.c
>> index 533ed26..fb533b9 100644
>> --- a/src/conf/virnetworkobj.c
>> +++ b/src/conf/virnetworkobj.c
>> @@ -79,13 +79,13 @@ virNetworkObjNew(void)
>>  if (!(net = virObjectLockableNew(virNetworkObjClass)))
>>  return NULL;
>>  
>> -if (!(net->class_id = virBitmapNew(CLASS_ID_BITMAP_SIZE)))
>> +if (!(net->classIdMap = virBitmapNew(CLASS_ID_BITMAP_SIZE)))
>>  goto error;
> 
> One thing I just realized: This creates a bitmap capable of holding
> 1<<16 bits - IOW it allocates 8KB of ram even though in reality just the
> first 10-20 bits are going to be used. Back in the day when I was
> implementing this there were no self-inflating bitmaps, but I guess we
> do have them now, right? Maybe we can rewrite this (ofc after you merge
> these to avoid conflicts) so that the new bitmap is used?
> 
> Michal
> 

I never gave it a second thought ;-) I haven't really dug into the
bitmap code, but I see virBitmapExpand, virBitmapSetBitExpand, and
virBitmapClearBitExpand that would seem to be useful for someone willing
to make that kind of alteration.

John

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] qemu: undefine is not allowed during domain starting up

2017-08-15 Thread John Ferlan


On 07/22/2017 04:55 AM, Yi Wang wrote:
> Start a domain while undefining it: if starting fails during ProcessLaunch
> because qemu exited unexpectedly, undefining the domain will keep failing
> until libvirtd is restarted. The reason is that libvirtd unlocks the vm
> during qemuProcessStart, so qemuDomainUndefineFlags can get the lock and
> set vm->persistent to 0 but does not remove the "active" domain.
> 
> Signed-off-by: Yi Wang 
> ---
>  src/conf/domain_conf.h  | 1 +
>  src/qemu/qemu_driver.c  | 6 ++
>  src/qemu/qemu_process.c | 3 +++
>  3 files changed, 10 insertions(+)
> 

Can you apply a couple of recent patches, see:

https://www.redhat.com/archives/libvir-list/2017-August/msg00389.html

and see if those would resolve what you're seeing... without these
changes of course...

Tks -

John

> diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
> index af15ee8..f339f84 100644
> --- a/src/conf/domain_conf.h
> +++ b/src/conf/domain_conf.h
> @@ -2468,6 +2468,7 @@ struct _virDomainObj {
>  virDomainSnapshotObjPtr current_snapshot;
>  
>  bool hasManagedSave;
> +bool starting;
>  
>  void *privateData;
>  void (*privateDataFreeFunc)(void *);
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 6568def..5d9acfc 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -7317,6 +7317,12 @@ qemuDomainUndefineFlags(virDomainPtr dom,
>  if (!(vm = qemuDomObjFromDomain(dom)))
>  return -1;
>  
> +if (vm->starting) {
> +virReportError(VIR_ERR_OPERATION_INVALID,
> +   "%s", _("cannot undefine during domain starting up"));
> +goto cleanup;
> +}
> +
>  cfg = virQEMUDriverGetConfig(driver);
>  
>  if (virDomainUndefineFlagsEnsureACL(dom->conn, vm->def) < 0)
> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> index 525521a..7b708be 100644
> --- a/src/qemu/qemu_process.c
> +++ b/src/qemu/qemu_process.c
> @@ -5847,6 +5847,8 @@ qemuProcessStart(virConnectPtr conn,
>  if (!migrateFrom && !snapshot)
>  flags |= VIR_QEMU_PROCESS_START_NEW;
>  
> +vm->starting = true;
> +
>  if (qemuProcessInit(driver, vm, updatedCPU,
>  asyncJob, !!migrateFrom, flags) < 0)
>  goto cleanup;
> @@ -5892,6 +5894,7 @@ qemuProcessStart(virConnectPtr conn,
>  ret = 0;
>  
>   cleanup:
> +vm->starting = false;
>  qemuProcessIncomingDefFree(incoming);
>  return ret;
>  
> 



[libvirt] question about locking in qemuDomainObjBeginJobInternal()

2017-08-15 Thread Chris Friesen

Hi,

I'm hitting a scenario (on libvirt 1.2.12, so yeah it's a bit old) where I'm 
attempting to create two domains at the same time, and they both end up erroring 
out with "cannot acquire state change lock":



2017-08-14T12:57:00.000 79674: warning : qemuDomainObjBeginJobInternal:1380 : 
Cannot start job (modify, none) for domain instance-0001; current job is 
(modify, none) owned by (79673, 0)
2017-08-14T12:57:00.000 79674: error : qemuDomainObjBeginJobInternal:1385 : 
Timed out during operation: cannot acquire state change lock
2017-08-14T12:57:01.000 79675: warning : qemuDomainObjBeginJobInternal:1380 : 
Cannot start job (modify, none) for domain instance-0002; current job is 
(modify, none) owned by (79677, 0)
2017-08-14T12:57:01.000 79675: error : qemuDomainObjBeginJobInternal:1385 : 
Timed out during operation: cannot acquire state change lock


Given that the lock appears to be per-domain, I assume this means that something 
is trying to issue multiple operations in parallel to each domain?


Chris



Re: [libvirt] [PATCH v2 18/20] network: Rename @filter to @aclfilter

2017-08-15 Thread John Ferlan


On 08/15/2017 11:32 AM, Michal Privoznik wrote:
> On 07/26/2017 05:05 PM, John Ferlan wrote:
>> Rename the virNetworkObjListFilter to be virNetworkObjListACLFilter
>> since that's more representative of what it is. Also modify the
>> variable @filter to be @aclfilter. In the future adding the ability
>> to describe a generic @filter routine for the Export functions
>> could be a useful thing.
> 
> Well technically this is a filter. It's only that we use ACL filter
> function for it. But the implementation is generic enough for the cb to
> be called filter IMO. Therefore I'm not a fan of this one.
> 
> Michal
> 

Understood - I can drop it, but then it's different than what I've
already done in nwfilter, secret, nodedevice, and storage.

"Originally" there was an ACL filter and a "match" filter - the various
Export APIs have a Match API which takes 2 params (obj, flags) and
returns a bool. If there was a really common object, then that function
could be added as a filter callback function in a very generic export
function.

John



Re: [libvirt] [PATCH v2 17/20] network: Use @maxnames instead of @nnames

2017-08-15 Thread John Ferlan


On 08/15/2017 11:32 AM, Michal Privoznik wrote:
> On 07/26/2017 05:05 PM, John Ferlan wrote:
>> To be consistent with the API definition, use the @maxnames instead
>> of @nnames when describing/comparing against the maximum names to
>> be provided for the *ConnectList[Defined]Networks APIs.
>>
>> Signed-off-by: John Ferlan 
>> ---
>>  src/conf/virnetworkobj.c| 10 +-
>>  src/conf/virnetworkobj.h|  2 +-
>>  src/network/bridge_driver.c |  8 
>>  src/test/test_driver.c  |  8 
>>  4 files changed, 14 insertions(+), 14 deletions(-)
>>
>> diff --git a/src/conf/virnetworkobj.c b/src/conf/virnetworkobj.c
>> index 43fc2cf..d288dd0 100644
>> --- a/src/conf/virnetworkobj.c
>> +++ b/src/conf/virnetworkobj.c
>> @@ -1438,7 +1438,7 @@ struct virNetworkObjListGetHelperData {
>>  virConnectPtr conn;
>>  virNetworkObjListFilter filter;
>>  char **names;
>> -int nnames;
>> +int maxnames;
>>  bool active;
>>  int got;
>>  bool error;
> 
> Correct. @nnames may suggest there are @nnames items in the @names array.
> However, there are @got items! Very confusing.

True, changing @got to @nnames fixes that though...  and of course
moving it closer to char **names;... and yet another reason why all this
has been "confusing" thus far...  Would you like to see that patch or is
there enough trust that I know how to do that safely ;-).

I probably wrote this with future code in mind where all those values
are clearer anyway.

John



Re: [libvirt] [PATCH v2 11/20] network: Introduce virNetworkObj{Get|Set}Autostart

2017-08-15 Thread John Ferlan


On 08/15/2017 11:32 AM, Michal Privoznik wrote:
> On 07/26/2017 05:05 PM, John Ferlan wrote:
>> In preparation for privatizing the virNetworkObj structure, create
>> accessors for the obj->autostart.
>>
>> Signed-off-by: John Ferlan 
>> ---
>>  src/conf/virnetworkobj.c| 15 +++
>>  src/conf/virnetworkobj.h|  9 -
>>  src/libvirt_private.syms|  2 ++
>>  src/network/bridge_driver.c | 20 ++--
>>  src/test/test_driver.c  |  5 +++--
>>  5 files changed, 38 insertions(+), 13 deletions(-)
>>
>> diff --git a/src/conf/virnetworkobj.c b/src/conf/virnetworkobj.c
>> index 631f8cd..36d4bff 100644
>> --- a/src/conf/virnetworkobj.c
>> +++ b/src/conf/virnetworkobj.c
>> @@ -129,6 +129,21 @@ virNetworkObjGetNewDef(virNetworkObjPtr obj)
>>  }
>>  
>>  
>> +int
>> +virNetworkObjGetAutostart(virNetworkObjPtr obj)
>> +{
>> +return obj->autostart;
>> +}
>> +
>> +
>> +void
>> +virNetworkObjSetAutostart(virNetworkObjPtr obj,
>> +  int autostart)
>> +{
>> +obj->autostart = autostart;
>> +}
>> +
>> +
>>  pid_t
>>  virNetworkObjGetDnsmasqPid(virNetworkObjPtr obj)
>>  {
>> diff --git a/src/conf/virnetworkobj.h b/src/conf/virnetworkobj.h
>> index 90ce0ca..a526d30 100644
>> --- a/src/conf/virnetworkobj.h
>> +++ b/src/conf/virnetworkobj.h
>> @@ -32,7 +32,7 @@ struct _virNetworkObj {
>>  pid_t dnsmasqPid;
>>  pid_t radvdPid;
>>  unsigned int active : 1;
>> -unsigned int autostart : 1;
>> +int autostart;
> 
> Since we are touching this, does it make sense to convert it to bool?
> Interestingly, you're doing that conversion for @active in the next patch.
> 

I think because it was an external API provided value I left it at
'int'. For the other two active and persistent - those are boolean
states as a result of other internal operations and not outwardly
facing.  IOW: There's no virNetworkSetPersistent or virNetworkSetActive
type API's.

The GetAutostart algorithm, which assigns *autostart = obj->autostart,
also gave me pause, especially since the code is "repeated" in domain,
storage, and test. Initially I was thinking of combining all those
places that make the same sequence of calls, but that got shot down.
Changing it to a bool would probably be a "next step" exercise, but
"technically" the Get algorithm would then be
*autostart = obj->autostart ? 1 : 0; as opposed to the current
*autostart = obj->autostart;

>>  unsigned int persistent : 1;
>>  
>>  virNetworkDefPtr def; /* The current definition */
>> @@ -60,6 +60,13 @@ virNetworkObjSetDef(virNetworkObjPtr obj,
>>  virNetworkDefPtr
>>  virNetworkObjGetNewDef(virNetworkObjPtr obj);
>>  
>> +int
>> +virNetworkObjGetAutostart(virNetworkObjPtr obj);
>> +
>> +void
>> +virNetworkObjSetAutostart(virNetworkObjPtr obj,
>> +  int autostart);
>> +
>>  virMacMapPtr
>>  virNetworkObjGetMacMap(virNetworkObjPtr obj);
>>  
>> diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
>> index 84e3eb7..8a3284f 100644
>> --- a/src/libvirt_private.syms
>> +++ b/src/libvirt_private.syms
>> @@ -944,6 +944,7 @@ virNetworkObjFindByName;
>>  virNetworkObjFindByNameLocked;
>>  virNetworkObjFindByUUID;
>>  virNetworkObjFindByUUIDLocked;
>> +virNetworkObjGetAutostart;
>>  virNetworkObjGetClassIdMap;
>>  virNetworkObjGetDef;
>>  virNetworkObjGetDnsmasqPid;
>> @@ -966,6 +967,7 @@ virNetworkObjNew;
>>  virNetworkObjRemoveInactive;
>>  virNetworkObjReplacePersistentDef;
>>  virNetworkObjSaveStatus;
>> +virNetworkObjSetAutostart;
>>  virNetworkObjSetDef;
>>  virNetworkObjSetDefTransient;
>>  virNetworkObjSetDnsmasqPid;
>> diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
>> index b4fbfc5..eb814d3 100644
>> --- a/src/network/bridge_driver.c
>> +++ b/src/network/bridge_driver.c
>> @@ -525,7 +525,7 @@ networkAutostartConfig(virNetworkObjPtr obj,
>>  int ret = -1;
>>  
>>  virObjectLock(obj);
>> -if (obj->autostart &&
>> +if (virNetworkObjGetAutostart(obj) &&
>>  !virNetworkObjIsActive(obj) &&
>>  networkStartNetwork(driver, obj) < 0)
>>  goto cleanup;
>> @@ -3963,7 +3963,7 @@ networkGetAutostart(virNetworkPtr net,
>>  if (virNetworkGetAutostartEnsureACL(net->conn, 
>> virNetworkObjGetDef(obj)) < 0)
>>  goto cleanup;
>>  
>> -*autostart = obj->autostart;
>> +*autostart = virNetworkObjGetAutostart(obj);
>>  ret = 0;
>>  
>>   cleanup:
>> @@ -3974,12 +3974,13 @@ networkGetAutostart(virNetworkPtr net,
>>  
>>  static int
>>  networkSetAutostart(virNetworkPtr net,
>> -int autostart)
>> +int new_autostart)
>>  {
>>  virNetworkDriverStatePtr driver = networkGetDriver();
>>  virNetworkObjPtr obj;
>>  virNetworkDefPtr def;
>>  char *configFile = NULL, *autostartLink = NULL;
>> +int cur_autostart;
>>  int ret = -1;
>>  
>>  if (!(obj = networkObjFromNetwork(net)))
>> @@ -3995,9 +3996,9 @@ 

Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread John Ferlan


On 08/15/2017 11:47 AM, Peter Krempa wrote:
> On Tue, Aug 15, 2017 at 10:02:38 -0400, John Ferlan wrote:
>> On 08/15/2017 08:01 AM, Peter Krempa wrote:
>>> On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
 On 08/03/2017 04:50 AM, Peter Krempa wrote:
>>>
>>> [ trimmed off-topic part ]
>>>
 NB: I didn't dig into the qemumonitorjsontest algorithm, but...

 While going through the common object changes, I ended up looking at and
 thinking about the hash table algorithms, initially it was just the
 "grow" algorithm as I think it is a bit "aggressive" (perhaps wasteful)
 since as soon as 1 bucket chain exceeds 8 elements, the table is resized
 by 8 unless the new size is 8 * 2048 - at which time no matter how full
 a bucket is we don't resize the table
>>>
>>> Resizing the table is very expensive, so it makes sense to scale it
>>> aggressively. At some point though it would take too much memory.
>>>
>>> If some place in the code expects a massive hash table, it can set the
>>> hash table size to > 2048 manually.
>>>
>>
>> Right - avoiding excessive or unnecessary resizing is something I was
>> considering. One thing that was at the back of my mind is "somewhat
>> recent" discussions about 10s of thousands of disk/volumes. IIRC it's
>> been discussed on qemu list as it relates to libguestfs.
> 
> If we assume that our hash function has an even distribution of hashes
> (which was not refuted yet) then for 10k entries you should have 5ish
> entries per bucket. Even with 100k you are at 50 per bucket. That is
> totally manageable.
> 

I really hadn't done any analysis, I was in the wondering stage and
setting up the experiment. It's a part of the code I didn't know much
about, so it was also a learning exercise. I never knew anything about
murmurhash nor siphash before today!

>> Because volume names tend to be less random than a UUID, I was looking
>> at how the existing algorithms operate when you have say "disk1"
>> through "disk9" w/r/t bucket layout and length of chains when we
>> reach the maximum bucket count.
>>
>> Still even with that, resizing a table just because one hash bucket
>> chain goes above 8 just seemed unnecessary. Still a work in progress
>> though - I may come up with nothing. It's the classic power struggle of
>> time vs. space.
> 
> (This was covered by dan's reply)
> 
 The next thing I considered was using a prime number as the table bucket
 size and it seems by just doing that will cause qemumonitorjsontest to
>>>
>>> What would be the benefit of such change? I don't really see how prime
>>> numbers would improve anything performance-wise.
>>>
>>
>> Nothing performance wise directly that comes to mind. I see that while
>> composing Michal and Daniel have responded.
>>
>> FWIW: My hacking has altered virhashtest.c to generate 1000's of UUIDs
>> and "elemN" names in order to check if/when one or the other has a
>> bucket full problem with the existing algorithms.  It's been a couple
>> weeks since I was looking at the data - so results are not fresh in mind
>> especially since I was on PTO last week.
>>
 fail. Easy enough to reproduce by changing the virHashCreate call in
 qemuBlockNodeNameGetBackingChain to use 53 instead of 50 (even 49 causes
 failure). It doesn't matter if the test source also adjusts the initial
 hash size.

 Of course this makes sense given that virHashCodeGen is the backend of
 the virHashComputeKey algorithm that uses the return value %
 table->size. So if "size" changes, then obviously order changes.
>>>
>>> As you've said that is expected. I don't quite follow what is your
>>> point here.
>>>
>>
>> I was pointing out that by changing the size you get different ordered
>> results, nothing more, nothing less. It just so happens that 50 returns
>> this set of results, but 53 is different.
>>
 While I get the code bloat conundrum, but if the order doesn't matter
 then I wonder what other variables could change that would affect this
 test and whether we really want to be constantly altering the mocks
 "just" to get the right answer for this test.
>>>
>>> The order depends on the result of the hashing function, the number of
>>> buckets and the order you've added the entries. All of that is
>>> expected when dealing with a hash table.
>>>
>>> After this last patch, you are guaranteed to have the hash function
>>> return deterministic results (even on big-endian boxes).
>>>
>>
>> Change the size to 53, rebuild, and run the test. It'll take a few
>> minutes, but I think you'd see a failure.
> 
> I fail to see the point of such an exercise. It is expected that it will
> happen. If we wanted to make the testsuite slightly more robust in this
> regard, we could also mandate a fixed hash size for the mocked hash, but
> I don't quite feel the need to do so.
> 

Got it. You asked a question, I answered.  The only reason I used this
particular patch to respond was because this 

Re: [libvirt] [RFC v4 00/13] qmp: query-device-slots command

2017-08-15 Thread Eduardo Habkost
On Tue, Aug 15, 2017 at 01:57:50PM -0500, Eric Blake wrote:
> On 08/14/2017 04:57 PM, Eduardo Habkost wrote:
> > Changelog
> > -
> > 
> > Changes v3 -> v4:
> > * New compact representation of slot sets.
> > * New generic code to automatically merge similar slots
> >   into a single entry in the command output while keeping
> >   implementations of the method simpler.
> > * Example implementation of IDE and USB bus enumeration
> 
> > 
> > Slot sets are represented by a list of option names and sets of
> > possible values for each of those options.  The command uses a
> > compact representation for the set of valid values for an option.
> > For example, the following set of 5 PCI functions:
> > 
> >   bus: pcie.0
> >   device-number: 31
> >   functions: 1,4,5,6,7
> > 
> > would be represented in the JSON data as:
> > 
> >   {"available":false,"count":5,
> >"device-types":["pci-device"],"hotpluggable":false,
> >"opts":[
> >   {"option":"function","values":[1,[4,7]]},
> 
> A list (and not just a single-type list, but a list that mixes scalar
> and sublist),
> 
> >   {"option":"device-number","values":31},
> 
> vs. a scalar.  Why not a one-element array?

It was just to keep the representation as compact as possible, in
the common case of single-value sets.  Probably we can drop that
feature as it saves only 2 bytes in the JSON representation.

> 
> >   {"option":"bus","values":"pcie.0"}],
> >"opts-complete":true}
> > 
> > I planned to use QAPI alternates to model/document that in the
> > schema, but it would require implementing a few missing features
> > in QAPI alternate support.
> 
> Yeah, I can see how existing QAPI alternates do not yet support arrays,
> which becomes important to your representation.  Do you need help
> getting the QAPI generator improved to support a particular feature that
> you found to be lacking?

I think the lack of support for lists on alternates was the main
obstacle.

Probably we would also need to remove the restriction against
alternates with ambiguous string representations, to allow a
list/number/string/bool alternate to be defined.

Being able to set constraints on the number of elements of a list
would be nice to have, but not required.

-- 
Eduardo



Re: [libvirt] [RFC v4 00/13] qmp: query-device-slots command

2017-08-15 Thread Eric Blake
On 08/14/2017 04:57 PM, Eduardo Habkost wrote:
> Changelog
> -
> 
> Changes v3 -> v4:
> * New compact representation of slot sets.
> * New generic code to automatically merge similar slots
>   into a single entry in the command output while keeping
>   implementations of the method simpler.
> * Example implementation of IDE and USB bus enumeration

> 
> Slot sets are represented by a list of option names and sets of
> possible values for each of those options.  The command uses a
> compact representation for the set of valid values for an option.
> For example, the following set of 5 PCI functions:
> 
>   bus: pcie.0
>   device-number: 31
>   functions: 1,4,5,6,7
> 
> would be represented in the JSON data as:
> 
>   {"available":false,"count":5,
>"device-types":["pci-device"],"hotpluggable":false,
>"opts":[
>   {"option":"function","values":[1,[4,7]]},

A list (and not just a single-type list, but a list that mixes scalar
and sublist),

>   {"option":"device-number","values":31},

vs. a scalar.  Why not a one-element array?

>   {"option":"bus","values":"pcie.0"}],
>"opts-complete":true}
> 
> I planned to use QAPI alternates to model/document that in the
> schema, but it would require implementing a few missing features
> in QAPI alternate support.

Yeah, I can see how existing QAPI alternates do not yet support arrays,
which becomes important to your representation.  Do you need help
getting the QAPI generator improved to support a particular feature that
you found to be lacking?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread Peter Krempa
On Tue, Aug 15, 2017 at 10:02:38 -0400, John Ferlan wrote:
> On 08/15/2017 08:01 AM, Peter Krempa wrote:
> > On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
> >> On 08/03/2017 04:50 AM, Peter Krempa wrote:
> > 
> > [ trimmed off-topic part ]
> > 
> >> NB: I didn't dig into the qemumonitorjsontest algorithm, but...
> >>
> >> While going through the common object changes, I ended up looking at and
> >> thinking about the hash table algorithms, initially it was just the
> >> "grow" algorithm as I think it is a bit "aggressive" (perhaps wasteful)
> >> since as soon as 1 bucket chain exceeds 8 elements, the table is resized
> >> by 8 unless the new size is 8 * 2048 - at which time no matter how full
> >> a bucket is we don't resize the table
> > 
> > Resizing the table is very expensive, so it makes sense to scale it
aggressively. At some point though it would take too much memory.
> > 
If some place in the code expects a massive hash table, it can set the
hash table size to > 2048 manually.
> > 
> 
> Right - avoiding excessive or unnecessary resizing is something I was
> considering. One thing that was at the back of my mind is "somewhat
> recent" discussions about 10s of thousands of disk/volumes. IIRC it's
> been discussed on qemu list as it relates to libguestfs.

If we assume that our hash function has an even distribution of hashes
(which was not refuted yet) then for 10k entries you should have 5ish
entries per bucket. Even with 100k you are at 50 per bucket. That is
totally manageable.

> Because volume names tend to be less random than a UUID, I was looking
> at how the existing algorithms operate when you have say "disk1"
> through "disk9" w/r/t bucket layout and length of chains when we
> reach the maximum bucket count.
> 
> Still even with that, resizing a table just because one hash bucket
> chain goes above 8 just seemed unnecessary. Still a work in progress
> though - I may come up with nothing. It's the classic power struggle of
> time vs. space.

(This was covered by dan's reply)

> >> The next thing I considered was using a prime number as the table bucket
> >> size and it seems by just doing that will cause qemumonitorjsontest to
> > 
> > What would be the benefit of such change? I don't really see how prime
> > numbers would improve anything performance-wise.
> > 
> 
> Nothing performance wise directly that comes to mind. I see that while
> composing Michal and Daniel have responded.
> 
> FWIW: My hacking has altered virhashtest.c to generate 1000's of UUIDs
> and "elemN" names in order to check if/when one or the other has a
> bucket full problem with the existing algorithms.  It's been a couple
> weeks since I was looking at the data - so results are not fresh in mind
> especially since I was on PTO last week.
> 
> >> fail. Easy enough to reproduce by changing the virHashCreate call in
> >> qemuBlockNodeNameGetBackingChain to use 53 instead of 50 (even 49 causes
> >> failure). It doesn't matter if the test source also adjusts the initial
> >> hash size.
> >>
> >> Of course this makes sense given that virHashCodeGen is the backend of
> >> the virHashComputeKey algorithm that uses the return value %
> >> table->size. So if "size" changes, then obviously order changes.
> > 
> > As you've said that is expected. I don't quite follow what is your
> > point here.
> > 
> 
> I was pointing out that by changing the size you get different ordered
> results, nothing more, nothing less. It just so happens that 50 returns
> this set of results, but 53 is different.
> 
> >> While I get the code bloat conundrum, but if the order doesn't matter
> >> then I wonder what other variables could change that would affect this
> >> test and whether we really want to be constantly altering the mocks
> >> "just" to get the right answer for this test.
> > 
> > The order depends on the result of the hashing function, the number of
> > buckets and the order you've added the entries. All of that is
> > expected when dealing with a hash table.
> > 
> > After this last patch, you are guaranteed to have the hash function
> > return deterministic results (even on big-endian boxes).
> > 
> 
> Change the size to 53, rebuild, and run the test. It'll take a few
> minutes, but I think you'd see a failure.

I fail to see the point of such an exercise. It is expected that it will
happen. If we wanted to make the testsuite slightly more robust in this
regard, we could also mandate a fixed hash size for the mocked hash, but
I don't quite feel the need to do so.

> > The hash size is not really altered very often since you can't really
> > see a measurable performance benefit in most cases. This is also
> > strengthened by the fact that the hash function is randomized by a
> > random number, so in real usage you won't be able to deterministically
> > cause hash collisions.
> 
> True... at 50 with 4 elements you won't see a resize. You could possibly
> have a table size of 1 in this instance and be fine, but that 

Re: [libvirt] [PATCH v2 11/20] network: Introduce virNetworkObj{Get|Set}Autostart

2017-08-15 Thread Michal Privoznik
On 07/26/2017 05:05 PM, John Ferlan wrote:
> In preparation for privatizing the virNetworkObj structure, create
> accessors for the obj->autostart.
> 
> Signed-off-by: John Ferlan 
> ---
>  src/conf/virnetworkobj.c| 15 +++
>  src/conf/virnetworkobj.h|  9 -
>  src/libvirt_private.syms|  2 ++
>  src/network/bridge_driver.c | 20 ++--
>  src/test/test_driver.c  |  5 +++--
>  5 files changed, 38 insertions(+), 13 deletions(-)
> 
> diff --git a/src/conf/virnetworkobj.c b/src/conf/virnetworkobj.c
> index 631f8cd..36d4bff 100644
> --- a/src/conf/virnetworkobj.c
> +++ b/src/conf/virnetworkobj.c
> @@ -129,6 +129,21 @@ virNetworkObjGetNewDef(virNetworkObjPtr obj)
>  }
>  
>  
> +int
> +virNetworkObjGetAutostart(virNetworkObjPtr obj)
> +{
> +return obj->autostart;
> +}
> +
> +
> +void
> +virNetworkObjSetAutostart(virNetworkObjPtr obj,
> +  int autostart)
> +{
> +obj->autostart = autostart;
> +}
> +
> +
>  pid_t
>  virNetworkObjGetDnsmasqPid(virNetworkObjPtr obj)
>  {
> diff --git a/src/conf/virnetworkobj.h b/src/conf/virnetworkobj.h
> index 90ce0ca..a526d30 100644
> --- a/src/conf/virnetworkobj.h
> +++ b/src/conf/virnetworkobj.h
> @@ -32,7 +32,7 @@ struct _virNetworkObj {
>  pid_t dnsmasqPid;
>  pid_t radvdPid;
>  unsigned int active : 1;
> -unsigned int autostart : 1;
> +int autostart;

Since we are touching this, does it make sense to convert it to bool?
Interestingly, you're doing that conversion for @active in the next patch.

>  unsigned int persistent : 1;
>  
>  virNetworkDefPtr def; /* The current definition */
> @@ -60,6 +60,13 @@ virNetworkObjSetDef(virNetworkObjPtr obj,
>  virNetworkDefPtr
>  virNetworkObjGetNewDef(virNetworkObjPtr obj);
>  
> +int
> +virNetworkObjGetAutostart(virNetworkObjPtr obj);
> +
> +void
> +virNetworkObjSetAutostart(virNetworkObjPtr obj,
> +  int autostart);
> +
>  virMacMapPtr
>  virNetworkObjGetMacMap(virNetworkObjPtr obj);
>  
> diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
> index 84e3eb7..8a3284f 100644
> --- a/src/libvirt_private.syms
> +++ b/src/libvirt_private.syms
> @@ -944,6 +944,7 @@ virNetworkObjFindByName;
>  virNetworkObjFindByNameLocked;
>  virNetworkObjFindByUUID;
>  virNetworkObjFindByUUIDLocked;
> +virNetworkObjGetAutostart;
>  virNetworkObjGetClassIdMap;
>  virNetworkObjGetDef;
>  virNetworkObjGetDnsmasqPid;
> @@ -966,6 +967,7 @@ virNetworkObjNew;
>  virNetworkObjRemoveInactive;
>  virNetworkObjReplacePersistentDef;
>  virNetworkObjSaveStatus;
> +virNetworkObjSetAutostart;
>  virNetworkObjSetDef;
>  virNetworkObjSetDefTransient;
>  virNetworkObjSetDnsmasqPid;
> diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
> index b4fbfc5..eb814d3 100644
> --- a/src/network/bridge_driver.c
> +++ b/src/network/bridge_driver.c
> @@ -525,7 +525,7 @@ networkAutostartConfig(virNetworkObjPtr obj,
>  int ret = -1;
>  
>  virObjectLock(obj);
> -if (obj->autostart &&
> +if (virNetworkObjGetAutostart(obj) &&
>  !virNetworkObjIsActive(obj) &&
>  networkStartNetwork(driver, obj) < 0)
>  goto cleanup;
> @@ -3963,7 +3963,7 @@ networkGetAutostart(virNetworkPtr net,
>  if (virNetworkGetAutostartEnsureACL(net->conn, virNetworkObjGetDef(obj)) 
> < 0)
>  goto cleanup;
>  
> -*autostart = obj->autostart;
> +*autostart = virNetworkObjGetAutostart(obj);
>  ret = 0;
>  
>   cleanup:
> @@ -3974,12 +3974,13 @@ networkGetAutostart(virNetworkPtr net,
>  
>  static int
>  networkSetAutostart(virNetworkPtr net,
> -int autostart)
> +int new_autostart)
>  {
>  virNetworkDriverStatePtr driver = networkGetDriver();
>  virNetworkObjPtr obj;
>  virNetworkDefPtr def;
>  char *configFile = NULL, *autostartLink = NULL;
> +int cur_autostart;
>  int ret = -1;
>  
>  if (!(obj = networkObjFromNetwork(net)))
> @@ -3995,9 +3996,9 @@ networkSetAutostart(virNetworkPtr net,
>  goto cleanup;
>  }
>  
> -autostart = (autostart != 0);
> -
> -if (obj->autostart != autostart) {
> +new_autostart = (new_autostart != 0);
> +cur_autostart = virNetworkObjGetAutostart(obj);
> +if (cur_autostart != new_autostart) {
>  if ((configFile = virNetworkConfigFile(driver->networkConfigDir,
> def->name)) == NULL)
>  goto cleanup;
> @@ -4005,7 +4006,7 @@ networkSetAutostart(virNetworkPtr net,
>def->name)) == NULL)
>  goto cleanup;
>  
> -if (autostart) {
> +if (new_autostart) {
>  if (virFileMakePath(driver->networkAutostartDir) < 0) {
>  virReportSystemError(errno,
>   _("cannot create autostart directory '%s'"),
> @@ -4028,13 +4029,12 @@ networkSetAutostart(virNetworkPtr net,

Re: [libvirt] [PATCH v2 07/20] network: Alter virNetworkObj @class_id to be @classIdMap

2017-08-15 Thread Michal Privoznik
On 07/26/2017 05:05 PM, John Ferlan wrote:
> Change the variable name to be a bit more descriptive and less confusing
> when used with the data.network.actual->class_id.
> 
> Signed-off-by: John Ferlan 
> ---
>  src/conf/virnetworkobj.c| 39 ---
>  src/conf/virnetworkobj.h|  2 +-
>  src/network/bridge_driver.c | 10 +-
>  3 files changed, 26 insertions(+), 25 deletions(-)
> 
> diff --git a/src/conf/virnetworkobj.c b/src/conf/virnetworkobj.c
> index 533ed26..fb533b9 100644
> --- a/src/conf/virnetworkobj.c
> +++ b/src/conf/virnetworkobj.c
> @@ -79,13 +79,13 @@ virNetworkObjNew(void)
>  if (!(net = virObjectLockableNew(virNetworkObjClass)))
>  return NULL;
>  
> -if (!(net->class_id = virBitmapNew(CLASS_ID_BITMAP_SIZE)))
> +if (!(net->classIdMap = virBitmapNew(CLASS_ID_BITMAP_SIZE)))
>  goto error;

One thing I just realized: This creates a bitmap capable of holding
1<<16 bits - IOW it allocates 8KB of ram even though in reality just the
first 10-20 bits are going to be used. Back in the day when I was
implementing this there were no self-inflating bitmaps, but I guess we
do have them now, right? Maybe we can rewrite this (ofc after you merge
these to avoid conflicts) so that the new bitmap is used?

Michal

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2 18/20] network: Rename @filter to @aclfilter

2017-08-15 Thread Michal Privoznik
On 07/26/2017 05:05 PM, John Ferlan wrote:
> Rename the virNetworkObjListFilter to be virNetworkObjListACLFilter
> since that's more representative of what it is. Also modify the
> variable @filter to be @aclfilter. In the future adding the ability
> to describe a generic @filter routine for the Export functions
> could be a useful thing.

Well technically this is a filter. It's only that we use ACL filter
function for it. But the implementation is generic enough for the cb to
be called filter IMO. Therefore I'm not a fan of this one.

Michal



Re: [libvirt] [PATCH v2 00/20] Make the virNetworkObjPtr private

2017-08-15 Thread Michal Privoznik
On 07/26/2017 05:05 PM, John Ferlan wrote:
> v1: https://www.redhat.com/archives/libvir-list/2017-May/msg00701.html
> (but reviewed much more recently)
> 
> NOTE from v1:
> Patches 1-3 already pushed
> 
> Former patch 4:
>  * Patch 1 (NEW) - splits out the formatting change in bridge_driver.h
>  * Patch 2 - Remainder of the change for consistent naming
>  NB: Without the split, there was a R-B, but I didn't push so that the
>  split could be "seen".
> 
> Former patch 5:
>  * Patch 3 (NEW) - from review create virMacMapFileName in src/util/virmacmap.c
>  * Patch 4 - Remainder of the previous change w/ adjusted name of course
>  * Patch 5 (NEW) - from review, alter virNetworkObjUnrefMacMap
> 
> Former patch 6:
>  * Patch 6 - No change
> 
> Former patch 7:
>  * Patch 7 (NEW) - Split out the @class_id to @classIdMap change
>  * Patch 8 - Remainder of previous change
> 
> Former patch 8:
>  * Patch 9 - No change
> 
> Former patch 9:
>  * Patch 10 - Make suggested naming adjustments
>   Add/use virNetworkObjSetDef API
> 
> Former patch 10:
>  * Patch 11 - Move code back to driver, just have accessors for @autostart
> 
> Former patch 11:
>  * Patch 12 - No change
> 
> Former patch 12:
>  * Patch 13 - Use virNetworkObjIsPersistent in networkSetAutostart
> 
> Former patch 13:
>  * Patch 14 - No change
> 
> Former patch 14:
>  * Patch 15 - Just have the virNetworkObjNew lock the object now and make
>   use of that with using virNetworkObjEndAPI in networkxml2conftest
>   NB: Since we'll have a refcnt=1 and lock=true after New the
>   EndAPI is proper
>  * Patch 16 (NEW) - Just move the virObjectRef - makes it clearer why it's being ref'd
> Former patch 15:
>  * Patch 17 (NEW) - Split out the rename of @nnames to @maxnames and explain the reason better
>  * Patch 18 (NEW) - Split out the rename of @filter to @aclfilter and explain the reason better
>  * Patch 19 - The remainder of the former patch
> 
> Former patch 16:
>  * Patch 20 - No change (other than merge conflict resolution)
> 
> 
> John Ferlan (20):
>   network: Perform some formatting cleanup in bridge_driver.h
>   network: Use consistent naming in bridge_driver for virNetwork objects
>   network: Move and rename networkMacMgrFileName
>   network: Move macmap mgmt from bridge_driver to virnetworkobj
>   network: Unconditionally initialize macmap when stopping virtual
> network
>   network: Add virNetworkObj Get/Set API's for @dnsmasqPid and @radvdPid
>   network: Alter virNetworkObj @class_id to be @classIdMap
>   network: Introduce virNetworkObjGetClassIdMap
>   network: Add virNetworkObj Get/Set API's for @floor_sum
>   network: Add virNetworkObj Get/Set API's for @def and @newDef
>   network: Introduce virNetworkObj{Get|Set}Autostart
>   network: Introduce virNetworkObj{Is|Set}Active
>   network: Introduce virNetworkObjIsPersistent
>   network: Consistent use of @obj for virnetworkobj
>   network: Have virNetworkObjNew lock the returned object
>   network: Move virObjectRef during AssignDef processing
>   network: Use @maxnames instead of @nnames
>   network: Rename @filter to @aclfilter
>   network: Modify naming for virNetworkObjList* fetching APIs
>   network: Privatize virNetworkObj
> 
>  src/conf/virnetworkobj.c|  614 +++---
>  src/conf/virnetworkobj.h|  105 ++--
>  src/libvirt_private.syms|   21 +
>  src/network/bridge_driver.c | 1218 ++-
>  src/network/bridge_driver.h |   50 +-
>  src/test/test_driver.c  |   83 +--
>  src/util/virmacmap.c|   12 +
>  src/util/virmacmap.h|4 +
>  tests/networkxml2conftest.c |   11 +-
>  9 files changed, 1231 insertions(+), 887 deletions(-)
> 

Patch 11/20 has an issue, I think some discussion is needed there (also
some bug fixing). And I find patch 18/20 not that useful. ACK to the
rest though.

Michal



Re: [libvirt] [PATCH v2 17/20] network: Use @maxnames instead of @nnames

2017-08-15 Thread Michal Privoznik
On 07/26/2017 05:05 PM, John Ferlan wrote:
> To be consistent with the API definition, use the @maxnames instead
> of @nnames when describing/comparing against the maximum names to
> be provided for the *ConnectList[Defined]Networks APIs.
> 
> Signed-off-by: John Ferlan 
> ---
>  src/conf/virnetworkobj.c| 10 +-
>  src/conf/virnetworkobj.h|  2 +-
>  src/network/bridge_driver.c |  8 
>  src/test/test_driver.c  |  8 
>  4 files changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/src/conf/virnetworkobj.c b/src/conf/virnetworkobj.c
> index 43fc2cf..d288dd0 100644
> --- a/src/conf/virnetworkobj.c
> +++ b/src/conf/virnetworkobj.c
> @@ -1438,7 +1438,7 @@ struct virNetworkObjListGetHelperData {
>  virConnectPtr conn;
>  virNetworkObjListFilter filter;
>  char **names;
> -int nnames;
> +int maxnames;
>  bool active;
>  int got;
>  bool error;

Correct. @nnames may suggest there are @nnames items in the @names array.
However, there are @got items! Very confusing.

Michal



Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread Daniel P. Berrange
On Tue, Aug 15, 2017 at 10:02:38AM -0400, John Ferlan wrote:
> 
> 
> On 08/15/2017 08:01 AM, Peter Krempa wrote:
> > On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
> >> On 08/03/2017 04:50 AM, Peter Krempa wrote:
> > 
> > [ trimmed off-topic part ]
> > 
> >> NB: I didn't dig into the qemumonitorjsontest algorithm, but...
> >>
> >> While going through the common object changes, I ended up looking at and
> >> thinking about the hash table algorithms, initially it was just the
> >> "grow" algorithm as I think it is a bit "aggressive" (perhaps wasteful)
> >> since as soon as 1 bucket chain exceeds 8 elements, the table is resized
> >> by 8 unless the new size is 8 * 2048 - at which time no matter how full
> >> a bucket is we don't resize the table
> > 
> > Resizing the table is very expensive, so it makes sense to scale it
> aggressively. At some point though it would take too much memory.
> > 
> > If some place in the code expects massive hash table, it can set the
> > hash table to > 2048 manually.
> > 
> 
> Right - avoiding excessive or unnecessary resizing is something I was
> considering. One thing that was at the back of my mind is "somewhat
> recent" discussions about 10s of thousands of disk/volumes. IIRC it's
> been discussed on qemu list as it relates to libguestfs.
> 
> Because volume names tend to be less random than a UUID, I was looking
> at how the existing algorithms operate when you have say "disk1"
> through "disk9" w/r/t bucket layout and length of chains when we
> reach the maximum bucket count.

For any sensible hash algorithm, such a "predictable" naming convention
is not a real problem - indeed if it were a problem it would be considered
a security vulnerability these days.

While murmurhash is not a cryptographic hash, a single bit difference
in names should still produce a very different hashcode value. If we
switch to siphash though, which is a cryptographic hash, we will have
a strong guarantee of a completely different hash code for such names.

So again I don't think bucket size vs primes is important here - the
choice of hash function more directly determines the collision resistance
we have.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread John Ferlan


On 08/15/2017 08:01 AM, Peter Krempa wrote:
> On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
>> On 08/03/2017 04:50 AM, Peter Krempa wrote:
> 
> [ trimmed off-topic part ]
> 
>> NB: I didn't dig into the qemumonitorjsontest algorithm, but...
>>
>> While going through the common object changes, I ended up looking at and
>> thinking about the hash table algorithms, initially it was just the
>> "grow" algorithm as I think it is a bit "aggressive" (perhaps wasteful)
>> since as soon as 1 bucket chain exceeds 8 elements, the table is resized
>> by 8 unless the new size is 8 * 2048 - at which time no matter how full
>> a bucket is we don't resize the table
> 
> Resizing the table is very expensive, so it makes sense to scale it
> aggressively. At some point though it would take too much memory.
> 
> If some place in the code expects massive hash table, it can set the
> hash table to > 2048 manually.
> 

Right - avoiding excessive or unnecessary resizing is something I was
considering. One thing that was at the back of my mind is "somewhat
recent" discussions about 10s of thousands of disk/volumes. IIRC it's
been discussed on qemu list as it relates to libguestfs.

Because volume names tend to be less random than a UUID, I was looking
at how the existing algorithms operate when you have say "disk1"
through "disk9" w/r/t bucket layout and length of chains when we
reach the maximum bucket count.

Still even with that, resizing a table just because one hash bucket
chain goes above 8 just seemed unnecessary. Still a work in progress
though - I may come up with nothing. It's the classic power struggle of
time vs. space.

>> The next thing I considered was using a prime number as the table bucket
>> size and it seems by just doing that will cause qemumonitorjsontest to
> 
> What would be the benefit of such change? I don't really see how prime
> numbers would improve anything performance-wise.
> 

Nothing performance-wise directly comes to mind. I see that while I was
composing this, Michal and Daniel responded.

FWIW: My hacking has altered virhashtest.c to generate 1000's of UUIDs
and "elemN" names in order to check if/when one or the other has a
bucket full problem with the existing algorithms.  It's been a couple
weeks since I was looking at the data - so results are not fresh in mind
especially since I was on PTO last week.

>> fail. Easy enough to reproduce by changing the virHashCreate call in
>> qemuBlockNodeNameGetBackingChain to use 53 instead of 50 (even 49 causes
>> failure). It doesn't matter if the test source also adjusts the initial
>> hash size.
>>
>> Of course this makes sense given that virHashCodeGen is the backend of
>> the virHashComputeKey algorithm that uses the return value %
>> table->size. So if "size" changes, then obviously order changes.
> 
> As you've said that is expected. I don't quite follow what is your
> point here.
> 

I was pointing out that by changing the size you get different ordered
results, nothing more, nothing less. It just so happens that 50 returns
this set of results, but 53 is different.

>> While I get the code bloat conundrum, but if the order doesn't matter
>> then I wonder what other variables could change that would affect this
>> test and whether we really want to be constantly altering the mocks
>> "just" to get the right answer for this test.
> 
> The order depends on the result of the hashing function, the number of
> buckets and the order you've added the entries. All of those is
> expected when dealing with a hash table.
> 
> After this last patch, you are guaranteed to have the hash function
> return deterministic results (even on big endian-boxes).
> 

Change the size to 53, rebuild, and run the test. It'll take a few
minutes, but I think you'd see a failure.

> The hash size is not really altered very often since you can't really
> see a measurable performance benefit in most cases. This is also
> strengthened by the fact that the hash function is randomized by a
> random number, so in real usage you won't be able to deterministically
> cause hash collisions.

True... at 50 with 4 elements you won't see a resize. You could possibly
have a table size of 1 in this instance and be fine, but that only works
well for the test! It's more problematic in the real world because a table
size of 1 grows to 8, 64, etc.  Some would say the minimum table size
should be at least 8 if not the next prime, e.g. 11.

In the long run, UUID's are random, but "#blockN" values are far less
random - it's the human randomness factor. No one is going to create 100
randomly named domain names or volume names, so factoring in the less
than randomly named elements is what I've been pondering.

> 
> In the test code, the cases are added in a deterministic order. Since
> the hashing function in the tests is deterministic as well, the only
> thing that would influence the ordering is the bucket count. We don't
> modify them too often.
> 
> I don't see what would make us 

Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread Daniel P. Berrange
On Tue, Aug 15, 2017 at 03:03:07PM +0200, Michal Privoznik wrote:
> On 08/15/2017 02:01 PM, Peter Krempa wrote:
> > On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
> >> On 08/03/2017 04:50 AM, Peter Krempa wrote:
> > 
> > [ trimmed off-topic part ]
> > 
> >> NB: I didn't dig into the qemumonitorjsontest algorithm, but...
> >>
> >> While going through the common object changes, I ended up looking at and
> >> thinking about the hash table algorithms, initially it was just the
> >> "grow" algorithm as I think it is a bit "aggressive" (perhaps wasteful)
> >> since as soon as 1 bucket chain exceeds 8 elements, the table is resized
> >> by 8 unless the new size is 8 * 2048 - at which time no matter how full
> >> a bucket is we don't resize the table
> > 
> > Resizing the table is very expensive, so it makes sense to scale it
> > aggressively. At some point though it would take too much memory.
> > 
> > If some place in the code expects massive hash table, it can set the
> > hash table to > 2048 manually.
> > 
> >> The next thing I considered was using a prime number as the table bucket
> >> size and it seems by just doing that will cause qemumonitorjsontest to
> > 
> > What would be the benefit of such change? I don't really see how prime
> > numbers would improve anything performance-wise.
> 
> Because if your hash function is stupid it can cluster all the keys
> together. Primes, however, have way fewer divisors, therefore clustering
> is harder to achieve. Of course, you can construct a hash function so
> that it's utterly stupid (e.g. {return 4;}) and then all of our
> optimizations are thrown out of the window. But I believe that using
> prime numbers for table size is advised in the literature too.
> Apparently, wiki mentions this fact too:
> 
> https://en.wikipedia.org/wiki/Hash_table#Choosing_a_hash_function

If your hash function has a problem with clustering, then you really
should replace the hash function with a better one - which we already
did a few years back now :-)

Incidentally, we should replace our murmurhash funtion with siphash
at some point, because people have found attacks against murmurhash,
and so most devs have migrated over to siphash.

Regards,
Daniel

[1] https://emboss.github.io/blog/2012/12/14/breaking-murmur-hash-flooding-dos-reloaded/



Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread Peter Krempa
On Tue, Aug 15, 2017 at 15:03:07 +0200, Michal Privoznik wrote:
> On 08/15/2017 02:01 PM, Peter Krempa wrote:
> > On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
> >> On 08/03/2017 04:50 AM, Peter Krempa wrote:

[...]

> >> The next thing I considered was using a prime number as the table bucket
> >> size and it seems by just doing that will cause qemumonitorjsontest to
> > 
> > What would be the benefit of such change? I don't really see how prime
> > numbers would improve anything performance-wise.
> 
> Because if your hash function is stupid it can cluster all the keys
> together. Primes, however, have way fewer divisors, therefore clustering
> is harder to achieve. Of course, you can construct a hash function so
> that it's utterly stupid (e.g. {return 4;}) and then all of our
> optimizations are thrown out of the window. But I believe that using
> prime numbers for table size is advised in the literature too.
> Apparently, wiki mentions this fact too:
> 
> https://en.wikipedia.org/wiki/Hash_table#Choosing_a_hash_function

Our hash function is quite complex. I doubt that you will be able to
achieve any measurable performance benefit by this change. Especially
due to the random factor we are using.

The hash function used in tests is dumb, but it's dumb on purpose. I
really doubt that messing with this is worth it.



Re: [libvirt] [PATCH] qemu: Fix bug assuming usage of default UUID for certificate passphrase

2017-08-15 Thread Michal Privoznik
On 07/21/2017 11:47 PM, John Ferlan wrote:
> If an environment specific _tls_x509_cert_dir is provided, then
> do not VIR_STRDUP the defaultTLSx509secretUUID as that would be
> for the "default" environment and not the vnc, spice, chardev, or
> migrate environments. If the environment needs a secret to decode
> its certificate, then it must provide the secret. If the secrets
> happen to be the same, then configuration would use the same UUID
> as the default (but we cannot assume that nor can we assume that
> the secret would be necessary).
> 
> Signed-off-by: John Ferlan 
> ---
> 
>  While responding to a different patch today regarding Veritas and
>  usage of a default environment w/ or w/o secrets I realized that
>  the existing logic has a flaw in "assuming" that someone would want
>  to use the default secret. What if they defined their own environment
>  without a secret?  Then the code would create a secret object to pass
>  to QEMU which would think it needs to use it to decode the server
>  certificate (but it doesn't), so it would seemingly fail the start.
>  I assume based on the lack of complaints about this that everyone just
>  uses the default environment!
> 
>  src/qemu/qemu_conf.c | 10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
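The scenario John describes can be pictured with a qemu.conf fragment (the directory paths and UUID below are made-up examples): an environment-specific certificate directory deliberately configured without its own secret, alongside a default environment whose key is encrypted.

```
# VNC uses its own certificate directory with an unencrypted server
# key, so no vnc_tls_x509_secret_uuid is set -- and the default
# environment's secret must not be silently inherited here.
vnc_tls = 1
vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"

# The default environment may still use an encrypted key of its own.
default_tls_x509_cert_dir = "/etc/pki/qemu"
default_tls_x509_secret_uuid = "6fd3f62d-9fe7-4a4e-a869-7acd6376d8ea"
```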

ACK

Michal



Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread Michal Privoznik
On 08/15/2017 02:01 PM, Peter Krempa wrote:
> On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
>> On 08/03/2017 04:50 AM, Peter Krempa wrote:
> 
> [ trimmed off-topic part ]
> 
>> NB: I didn't dig into the qemumonitorjsontest algorithm, but...
>>
>> While going through the common object changes, I ended up looking at and
>> thinking about the hash table algorithms, initially it was just the
>> "grow" algorithm as I think it is a bit "aggressive" (perhaps wasteful)
>> since as soon as 1 bucket chain exceeds 8 elements, the table is resized
>> by 8 unless the new size is 8 * 2048 - at which time no matter how full
>> a bucket is we don't resize the table
> 
> Resizing the table is very expensive, so it makes sense to scale it
> aggressively. At some point though it would take too much memory.
> 
> If some place in the code expects massive hash table, it can set the
> hash table to > 2048 manually.
> 
>> The next thing I considered was using a prime number as the table bucket
>> size and it seems by just doing that will cause qemumonitorjsontest to
> 
> What would be the benefit of such change? I don't really see how prime
> numbers would improve anything performance-wise.

Because if your hash function is stupid it can cluster all the keys
together. Primes, however, have way fewer divisors, therefore clustering
is harder to achieve. Of course, you can construct a hash function so
that it's utterly stupid (e.g. {return 4;}) and then all of our
optimizations are thrown out of the window. But I believe that using
prime numbers for table size is advised in the literature too.
Apparently, wiki mentions this fact too:

https://en.wikipedia.org/wiki/Hash_table#Choosing_a_hash_function

Michal



Re: [libvirt] [PATCH 3/3] tests: deterministichash: Make hash tables arch-independent

2017-08-15 Thread Peter Krempa
On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
> On 08/03/2017 04:50 AM, Peter Krempa wrote:

[ trimmed off-topic part ]

> NB: I didn't dig into the qemumonitorjsontest algorithm, but...
> 
> While going through the common object changes, I ended up looking at and
> thinking about the hash table algorithms, initially it was just the
> "grow" algorithm as I think it is a bit "aggressive" (perhaps wasteful)
> since as soon as 1 bucket chain exceeds 8 elements, the table is resized
> by 8 unless the new size is 8 * 2048 - at which time no matter how full
> a bucket is we don't resize the table

Resizing the table is very expensive, so it makes sense to scale it
aggressively. At some point though it would take too much memory.

If some place in the code expects massive hash table, it can set the
hash table to > 2048 manually.

> The next thing I considered was using a prime number as the table bucket
> size and it seems by just doing that will cause qemumonitorjsontest to

What would be the benefit of such change? I don't really see how prime
numbers would improve anything performance-wise.

> fail. Easy enough to reproduce by changing the virHashCreate call in
> qemuBlockNodeNameGetBackingChain to use 53 instead of 50 (even 49 causes
> failure). It doesn't matter if the test source also adjusts the initial
> hash size.
> 
> Of course this makes sense given that virHashCodeGen is the backend of
> the virHashComputeKey algorithm that uses the return value %
> table->size. So if "size" changes, then obviously order changes.

As you've said that is expected. I don't quite follow what is your
point here.

> While I get the code bloat conundrum, but if the order doesn't matter
> then I wonder what other variables could change that would affect this
> test and whether we really want to be constantly altering the mocks
> "just" to get the right answer for this test.

The order depends on the result of the hashing function, the number of
buckets and the order you've added the entries. All of those is
expected when dealing with a hash table.

After this last patch, you are guaranteed to have the hash function
return deterministic results (even on big endian-boxes).

The hash size is not really altered very often since you can't really
see a measurable performance benefit in most cases. This is also
strengthened by the fact that the hash function is randomized by a
random number, so in real usage you won't be able to deterministically
cause hash collisions.

In the test code, the cases are added in a deterministic order. Since
the hashing function in the tests is deterministic as well, the only
thing that would influence the ordering is the bucket count. We don't
modify them too often.

I don't see what would make us alter mocks "just" to get the right
answer here. The reason for this patch was, that the hash function is
using bit operations and then treating the result as integers, which
obviously does not work deterministically between big and little endian
arches. Other than that, the test is now fully deterministic.

Peter



Re: [libvirt] [PATCHv2 0/3] exposing busy polling support for vhost-net

2017-08-15 Thread Daniel P. Berrange
On Mon, Jul 17, 2017 at 01:39:13PM +0200, Peter Krempa wrote:
> On Mon, Jul 17, 2017 at 07:27:24 -0400, sferd...@redhat.com wrote:
> > From: Sahid Orentino Ferdjaoui 
> > 
> > In version 2.7.0, QEMU introduced support for busy polling in
> > vhost-net [0]. To avoid paraphrasing the original authors on that feature
> > and its purpose, I prefer to share a pointer [1].
> > 
> > This patch series exposes, through the NIC driver-specific element, a
> > new option 'poll_us'. That option is only available with the backend
> > driver 'vhost', because libvirt automatically falls back to QEMU
> > if the driver is not specified, and the option is not available there.
> > 
> > The option 'poll_us' takes a positive integer. 0 means that the option
> > is not going to be exposed.
> 
> We had a similar attempt to do this for disk polling, but that was
> rejected since it's not very straightforward for the users to tune this
> variable. I think this falls into the same category.
> 
> Here's the discussion for iothread polling:
> 
> https://www.redhat.com/archives/libvir-list/2017-February/msg01048.html

Yes, thanks for pointing that out. I do have the same objections to this
patch, as for the previous disk polling patch. I just don't think they
are practical for an application to use - they're too low level to expose
to sysadmins, and there's no obvious way for an application to pick
the right settings automatically. So I think we're best served by letting
QEMU pick defaults.
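For reference, the NIC driver-specific element the series proposed would have looked roughly like this (the poll_us value is an arbitrary example, and the option was never merged):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <!-- busy-poll interval in microseconds; requires the vhost backend -->
  <driver name='vhost' poll_us='50'/>
</interface>
```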

Regards,
Daniel



[libvirt] [PATCH] libvirtd.conf: Drop max_requests

2017-08-15 Thread Michal Privoznik
Since its introduction in f61341173bdaa2e0 it was never
implemented, nor are there plans to implement it. Drop it.

Signed-off-by: Michal Privoznik 
---
 daemon/libvirtd-config.c|  3 ---
 daemon/libvirtd-config.h|  1 -
 daemon/libvirtd.aug |  1 -
 daemon/libvirtd.conf| 14 ++
 daemon/test_libvirtd.aug.in |  1 -
 5 files changed, 2 insertions(+), 18 deletions(-)

diff --git a/daemon/libvirtd-config.c b/daemon/libvirtd-config.c
index db283a41f..19b3d168e 100644
--- a/daemon/libvirtd-config.c
+++ b/daemon/libvirtd-config.c
@@ -153,7 +153,6 @@ daemonConfigNew(bool privileged ATTRIBUTE_UNUSED)
 
 data->prio_workers = 5;
 
-data->max_requests = 20;
 data->max_client_requests = 5;
 
 data->audit_level = 1;
@@ -348,8 +347,6 @@ daemonConfigLoadOptions(struct daemonConfig *data,
 if (virConfGetValueUInt(conf, "prio_workers", &data->prio_workers) < 0)
 goto error;
 
-if (virConfGetValueUInt(conf, "max_requests", &data->max_requests) < 0)
-goto error;
 if (virConfGetValueUInt(conf, "max_client_requests", &data->max_client_requests) < 0)
 goto error;
 
diff --git a/daemon/libvirtd-config.h b/daemon/libvirtd-config.h
index 1edf5fadb..d618c9660 100644
--- a/daemon/libvirtd-config.h
+++ b/daemon/libvirtd-config.h
@@ -71,7 +71,6 @@ struct daemonConfig {
 
 unsigned int prio_workers;
 
-unsigned int max_requests;
 unsigned int max_client_requests;
 
 unsigned int log_level;
diff --git a/daemon/libvirtd.aug b/daemon/libvirtd.aug
index 24fdf445c..df310d876 100644
--- a/daemon/libvirtd.aug
+++ b/daemon/libvirtd.aug
@@ -60,7 +60,6 @@ module Libvirtd =
 | int_entry "max_clients"
 | int_entry "max_queued_clients"
 | int_entry "max_anonymous_clients"
-| int_entry "max_requests"
 | int_entry "max_client_requests"
 | int_entry "prio_workers"
 
diff --git a/daemon/libvirtd.conf b/daemon/libvirtd.conf
index e83e9a1c1..8a1b3a92d 100644
--- a/daemon/libvirtd.conf
+++ b/daemon/libvirtd.conf
@@ -301,20 +301,10 @@
 # (notably domainDestroy) can be executed in this pool.
 #prio_workers = 5
 
-# Total global limit on concurrent RPC calls. Should be
-# at least as large as max_workers. Beyond this, RPC requests
-# will be read into memory and queued. This directly impacts
-# memory usage, currently each request requires 256 KB of
-# memory. So by default up to 5 MB of memory is used
-#
-# XXX this isn't actually enforced yet, only the per-client
-# limit is used so far
-#max_requests = 20
-
 # Limit on concurrent requests from a single client
 # connection. To avoid one client monopolizing the server
-# this should be a small fraction of the global max_requests
-# and max_workers parameter
+# this should be a small fraction of the global max_workers
+# parameter.
 #max_client_requests = 5
 
 # Same processing controls, but this time for the admin interface.
diff --git a/daemon/test_libvirtd.aug.in b/daemon/test_libvirtd.aug.in
index 12009528f..b24b32e33 100644
--- a/daemon/test_libvirtd.aug.in
+++ b/daemon/test_libvirtd.aug.in
@@ -42,7 +42,6 @@ module Test_libvirtd =
 { "min_workers" = "5" }
 { "max_workers" = "20" }
 { "prio_workers" = "5" }
-{ "max_requests" = "20" }
 { "max_client_requests" = "5" }
 { "admin_min_workers" = "1" }
 { "admin_max_workers" = "5" }
-- 
2.13.0



Re: [libvirt] [dbus PATCH 0/7] additional cleanups and improvements

2017-08-15 Thread Pavel Hrdina
On Mon, Aug 14, 2017 at 09:40:14PM -0400, John Ferlan wrote:
> 
> 
> On 08/08/2017 11:25 AM, Pavel Hrdina wrote:
> > Pavel Hrdina (7):
> >   maint: cleanup gitignore
> >   maint: ignore dist tarball
> >   build: fix distcheck
> >   build: move test related bits to test/Makefile.am
> >   maint: move service file into data directory
> >   data: add system dbus service file
> >   data: add system dbus service policy configuration
> > 
> >  .gitignore  | 21 +++--
> >  Makefile.am |  6 +-
> >  configure.ac| 23 +++
> >  data/Makefile.am| 33 +
> >  data/session/org.libvirt.service.in |  3 +++
> >  data/system/org.libvirt.conf| 12 
> >  data/system/org.libvirt.service.in  |  4 
> >  libvirt-dbus.spec.in|  2 ++
> >  src/Makefile.am | 17 -
> >  src/org.libvirt.service.in  |  3 ---
> >  test/Makefile.am| 16 
> >  test/libvirttest.py |  2 +-
> >  12 files changed, 106 insertions(+), 36 deletions(-)
> >  create mode 100644 data/Makefile.am
> >  create mode 100644 data/session/org.libvirt.service.in
> >  create mode 100644 data/system/org.libvirt.conf
> >  create mode 100644 data/system/org.libvirt.service.in
> >  delete mode 100644 src/org.libvirt.service.in
> >  create mode 100644 test/Makefile.am
> > 
> 
> 
> You know... It might be nice to add something regarding how to build,
> test, use distcheck, etc. into HACKING.  I know I can never remember for
> -perl and I can always go there to cut-n-paste... including how running
> "run" from the libvirt master tree uses those bits.

It might be nice :) I'll add it to my TODO list.  There is no need to
have any "run" for libvirt-dbus.  The steps are simple, just running
"./autogen.sh && make && make check" but it makes sense to have it in
HACKING.

> FWIW:
> Even after applying these and building I get:
> 
> $ git status
> On branch rvw-phrdina-dbus-more
> Your branch is ahead of 'master' by 7 commits.
>   (use "git push" to publish your local commits)
> Untracked files:
>   (use "git add ..." to include in what will be committed)
> 
>   src/org.libvirt.service
> 
> nothing added to commit but untracked files present (use "git add" to track)
> 
> So that may need to stay in patch 5 or somehow be magically removed if
> it exists since it's now moved/created elsewhere.

I guess that it could confuse someone that suddenly there is some
untracked file.  I'll keep the line in gitignore.

> Beyond that - things look OK to me. I can successfully build for each
> patch and they all look reasonable to me (although I'm far from an
> expert - I count on others for that!).
> 
> Reviewed-by: John Ferlan  

Thanks, I'll wait for few days and if nobody else objects I'll push it.

Pavel



Re: [libvirt] [libvirt-php][PATCH] Generate AUTHORS file

2017-08-15 Thread Michal Privoznik
On 08/15/2017 10:39 AM, Ján Tomko wrote:
> On Tue, Aug 15, 2017 at 07:43:36AM +0200, Michal Privoznik wrote:
>> On 08/14/2017 05:42 PM, Ján Tomko wrote:
>>> On Sat, Aug 05, 2017 at 02:29:58PM +0200, Michal Privoznik wrote:
 +gen-AUTHORS:
 +$(AM_V_GEN)if test -d $(srcdir)/.git; then \
 +out="`cd $(srcdir) && git log --pretty=format:'%aN <%aE>' |
 sort -u | sed 's/^/\t/'`" && \
 +perl -p -e "s/#authorslist#// and print '$$out'" \
 +  < $(srcdir)/AUTHORS.in > $(distdir)/AUTHORS-tmp && \
 +mv -f $(distdir)/AUTHORS-tmp $(distdir)/AUTHORS ; \
 +fi
>>>
>>> What is the point of going through a temporary file and spawning one
>>> extra process on success? We use the pattern a lot in libvirt's Makefile
>>> and it just seems wasteful to me.
>>
>> What extra process do you have in mind?
>>
> 
> mv

Oh yeah. Well I think the idea is to replace AUTHORS iff we successfully
generated it. It doesn't bother me that much to rewrite it. Processes
come and go. But yet again, if you feel this is a problem, be my
guest and propose a patch.

> 
>>>
>>> Also, this probably won't work correctly with git work-trees, since
>>> those have .git as a regular file, not a directory. See:
>>> http://git.savannah.gnu.org/cgit/gnulib.git/commit/?id=e97dce1b84ab63
>>
>> Well, in that case it's broken in libvirt too. Feel free to fix it and
>> I'll cherry-pick the fix into -php then.
>>
> 
> I volunteer to cherry-pick it to libvirt, just in case you fix it in
> libvirt-php first.

I will not. I don't care that much to try to fix it. Be my guest.

Michal



Re: [libvirt] [libvirt-php][PATCH] Generate AUTHORS file

2017-08-15 Thread Ján Tomko

On Tue, Aug 15, 2017 at 07:43:36AM +0200, Michal Privoznik wrote:

On 08/14/2017 05:42 PM, Ján Tomko wrote:

On Sat, Aug 05, 2017 at 02:29:58PM +0200, Michal Privoznik wrote:

+gen-AUTHORS:
+$(AM_V_GEN)if test -d $(srcdir)/.git; then \
+out="`cd $(srcdir) && git log --pretty=format:'%aN <%aE>' |
sort -u | sed 's/^/\t/'`" && \
+perl -p -e "s/#authorslist#// and print '$$out'" \
+  < $(srcdir)/AUTHORS.in > $(distdir)/AUTHORS-tmp && \
+mv -f $(distdir)/AUTHORS-tmp $(distdir)/AUTHORS ; \
+fi


What is the point of going through a temporary file and spawning one
extra process on success? We use the pattern a lot in libvirt's Makefile
and it just seems wasteful to me.


What extra process do you have in mind?



mv



Also, this probably won't work correctly with git work-trees, since
those have .git as a regular file, not a directory. See:
http://git.savannah.gnu.org/cgit/gnulib.git/commit/?id=e97dce1b84ab63


Well, in that case it's broken in libvirt too. Feel free to fix it and
I'll cherry-pick the fix into -php then.



I volunteer to cherry-pick it to libvirt, just in case you fix it in
libvirt-php first.

Jan



[libvirt] [PATCH v2 0/2] Have qemuDomainUndefineFlags grab a job

2017-08-15 Thread Michal Privoznik
This is v2 to:

https://www.redhat.com/archives/libvir-list/2017-August/msg00222.html

diff to v1:
- Split qemuDomainRemoveInactive into two functions so that its safe to call it
  from places where we already have a job.

Michal Privoznik (2):
  qemu: Introduce and use qemuDomainRemoveInactiveJob
  qemuDomainUndefineFlags: Grab QEMU_JOB_MODIFY

 src/qemu/qemu_domain.c| 36 +++-
 src/qemu/qemu_domain.h|  3 +++
 src/qemu/qemu_driver.c| 47 ++-
 src/qemu/qemu_migration.c | 10 +-
 src/qemu/qemu_process.c   | 10 +-
 5 files changed, 66 insertions(+), 40 deletions(-)

-- 
2.13.0



[libvirt] [PATCH v2 1/2] qemu: Introduce and use qemuDomainRemoveInactiveJob

2017-08-15 Thread Michal Privoznik
At some places we either already have synchronous job or we just
released it. Also, some APIs might want to use this code without
having to release their job. Anyway, the job acquire code is
moved out to qemuDomainRemoveInactiveJob so that
qemuDomainRemoveInactive does just what it promises.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_domain.c| 36 +++-
 src/qemu/qemu_domain.h|  3 +++
 src/qemu/qemu_driver.c| 26 +-
 src/qemu/qemu_migration.c | 10 +-
 src/qemu/qemu_process.c   | 10 +-
 5 files changed, 53 insertions(+), 32 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 40608554c..2b19f841c 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5187,14 +5187,16 @@ qemuDomainSnapshotDiscardAllMetadata(virQEMUDriverPtr 
driver,
 return rem.err;
 }
 
-/*
+
+/**
+ * qemuDomainRemoveInactive:
+ *
  * The caller must hold a lock the vm.
  */
 void
 qemuDomainRemoveInactive(virQEMUDriverPtr driver,
  virDomainObjPtr vm)
 {
-bool haveJob = true;
 char *snapDir;
 virQEMUDriverConfigPtr cfg;
 
@@ -5205,9 +5207,6 @@ qemuDomainRemoveInactive(virQEMUDriverPtr driver,
 
 cfg = virQEMUDriverGetConfig(driver);
 
-if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
-haveJob = false;
-
 /* Remove any snapshot metadata prior to removing the domain */
 if (qemuDomainSnapshotDiscardAllMetadata(driver, vm) < 0) {
 VIR_WARN("unable to remove all snapshots for domain %s",
@@ -5240,13 +5239,32 @@ qemuDomainRemoveInactive(virQEMUDriverPtr driver,
  */
 virObjectLock(vm);
 virObjectUnref(cfg);
-
-if (haveJob)
-qemuDomainObjEndJob(driver, vm);
-
 virObjectUnref(vm);
 }
 
+
+/**
+ * qemuDomainRemoveInactiveJob:
+ *
+ * Just like qemuDomainRemoveInactive but it tries to grab a
+ * QEMU_JOB_MODIFY first. Even if it fails to grab the
+ * job, control still proceeds to qemuDomainRemoveInactive.
+ */
+void
+qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver,
+virDomainObjPtr vm)
+{
+bool haveJob;
+
+haveJob = qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) >= 0;
+
+qemuDomainRemoveInactive(driver, vm);
+
+if (haveJob)
+qemuDomainObjEndJob(driver, vm);
+}
+
+
 void
 qemuDomainSetFakeReboot(virQEMUDriverPtr driver,
 virDomainObjPtr vm,
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 4c9050aff..f93b09b69 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -611,6 +611,9 @@ int qemuDomainSnapshotDiscardAllMetadata(virQEMUDriverPtr 
driver,
 void qemuDomainRemoveInactive(virQEMUDriverPtr driver,
   virDomainObjPtr vm);
 
+void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver,
+ virDomainObjPtr vm);
+
 void qemuDomainSetFakeReboot(virQEMUDriverPtr driver,
  virDomainObjPtr vm,
  bool value);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e9f07c6e7..94c9c003f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1779,7 +1779,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr 
conn,
 def = NULL;
 
 if (qemuProcessBeginJob(driver, vm, VIR_DOMAIN_JOB_OPERATION_START) < 0) {
-qemuDomainRemoveInactive(driver, vm);
+qemuDomainRemoveInactiveJob(driver, vm);
 goto cleanup;
 }
 
@@ -1789,7 +1789,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr 
conn,
  start_flags) < 0) {
 virDomainAuditStart(vm, "booted", false);
 qemuProcessEndJob(driver, vm);
-qemuDomainRemoveInactive(driver, vm);
+qemuDomainRemoveInactiveJob(driver, vm);
 goto cleanup;
 }
 
@@ -2259,9 +2259,9 @@ qemuDomainDestroyFlags(virDomainPtr dom,
 
 ret = 0;
  endjob:
-qemuDomainObjEndJob(driver, vm);
 if (ret == 0)
 qemuDomainRemoveInactive(driver, vm);
+qemuDomainObjEndJob(driver, vm);
 
  cleanup:
 virDomainObjEndAPI();
@@ -3396,7 +3396,7 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, 
virDomainPtr dom,
 }
 qemuDomainObjEndAsyncJob(driver, vm);
 if (ret == 0)
-qemuDomainRemoveInactive(driver, vm);
+qemuDomainRemoveInactiveJob(driver, vm);
 
  cleanup:
 virObjectUnref(cookie);
@@ -3916,7 +3916,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
 
 qemuDomainObjEndAsyncJob(driver, vm);
 if (ret == 0 && flags & VIR_DUMP_CRASH)
-qemuDomainRemoveInactive(driver, vm);
+qemuDomainRemoveInactiveJob(driver, vm);
 
  cleanup:
 virDomainObjEndAPI();
@@ -4227,7 +4227,7 @@ processGuestPanicEvent(virQEMUDriverPtr driver,
  endjob:
 qemuDomainObjEndAsyncJob(driver, vm);
 if (removeInactive)
-qemuDomainRemoveInactive(driver, vm);
+

[libvirt] [PATCH v2 2/2] qemuDomainUndefineFlags: Grab QEMU_JOB_MODIFY

2017-08-15 Thread Michal Privoznik
This API is definitely modifying state of @vm. Therefore it
should grab a job.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_driver.c | 25 +++--
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 94c9c003f..3f7909c21 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -7325,10 +7325,13 @@ qemuDomainUndefineFlags(virDomainPtr dom,
 if (virDomainUndefineFlagsEnsureACL(dom->conn, vm->def) < 0)
 goto cleanup;
 
+if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
+goto cleanup;
+
 if (!vm->persistent) {
 virReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("cannot undefine transient domain"));
-goto cleanup;
+goto endjob;
 }
 
 if (!virDomainObjIsActive(vm) &&
@@ -7338,15 +7341,15 @@ qemuDomainUndefineFlags(virDomainPtr dom,
_("cannot delete inactive domain with %d "
  "snapshots"),
nsnapshots);
-goto cleanup;
+goto endjob;
 }
 if (qemuDomainSnapshotDiscardAllMetadata(driver, vm) < 0)
-goto cleanup;
+goto endjob;
 }
 
 name = qemuDomainManagedSavePath(driver, vm);
 if (name == NULL)
-goto cleanup;
+goto endjob;
 
 if (virFileExists(name)) {
 if (flags & VIR_DOMAIN_UNDEFINE_MANAGED_SAVE) {
@@ -7354,13 +7357,13 @@ qemuDomainUndefineFlags(virDomainPtr dom,
 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Failed to remove domain managed "
  "save image"));
-goto cleanup;
+goto endjob;
 }
 } else {
 virReportError(VIR_ERR_OPERATION_INVALID, "%s",
_("Refusing to undefine while domain managed "
  "save image exists"));
-goto cleanup;
+goto endjob;
 }
 }
 
@@ -7372,17 +7375,17 @@ qemuDomainUndefineFlags(virDomainPtr dom,
 virReportSystemError(errno,
  _("failed to remove nvram: %s"),
  vm->def->os.loader->nvram);
-goto cleanup;
+goto endjob;
 }
 } else if (!(flags & VIR_DOMAIN_UNDEFINE_KEEP_NVRAM)) {
 virReportError(VIR_ERR_OPERATION_INVALID, "%s",
_("cannot delete inactive domain with nvram"));
-goto cleanup;
+goto endjob;
 }
 }
 
 if (virDomainDeleteConfig(cfg->configDir, cfg->autostartDir, vm) < 0)
-goto cleanup;
+goto endjob;
 
 event = virDomainEventLifecycleNewFromObj(vm,
  VIR_DOMAIN_EVENT_UNDEFINED,
@@ -7396,9 +7399,11 @@ qemuDomainUndefineFlags(virDomainPtr dom,
  */
 vm->persistent = 0;
 if (!virDomainObjIsActive(vm))
-qemuDomainRemoveInactiveJob(driver, vm);
+qemuDomainRemoveInactive(driver, vm);
 
 ret = 0;
+ endjob:
+qemuDomainObjEndJob(driver, vm);
 
  cleanup:
 VIR_FREE(name);
-- 
2.13.0



Re: [libvirt] [PATCH 1/2] qemuDomainUndefineFlags: Grab QEMU_JOB_MODIFY

2017-08-15 Thread Michal Privoznik
On 08/15/2017 03:00 AM, John Ferlan wrote:
> 
> 
> On 08/07/2017 08:20 AM, Michal Privoznik wrote:
>> This API is definitely modifying state of @vm. Therefore it
>> should grab a job.
>>
>> Signed-off-by: Michal Privoznik 
>> ---
>>  src/qemu/qemu_driver.c | 23 ++-
>>  1 file changed, 14 insertions(+), 9 deletions(-)
>>
> 
> Not sure which of or if this patch from the two undefine series is
> active, but I wonder how this "interacts or intersects" with:
> 
> https://www.redhat.com/archives/libvir-list/2017-July/msg00921.html

Oh right. This patch of mine can resolve the issue that you're linking.
The thing is, in qemuDomainCreateWithFlags() an async job is grabbed
(VIR_DOMAIN_JOB_OPERATION_START) and as such only certain types of sync
jobs are allowed. In this specific case just destroy is permitted.
Therefore, if undefine would grab a modify type of job it would be
suspended as long as the async job is set on the domain.

Good point! Thanks for catching that.

> 
> which was posted late last month... I looked at it, but really hadn't
> dug into yet.  The summary of the linked patch is it's possible to run
> an undefine while a define is occurring because locks are dropped during
> launch processing. However, if there's a job, then wouldn't that mean
> it'd be less likely or impossible to allow that undefine while the
> define job was happening?  Of course as Martin notes RemoveInactive
> cannot be run with a job, but the problem from the other patch is that
> the RemoveInactive is at least partially successful.
> 
> I guess I just wanted to be sure to note the link between the two just
> in case you were actively still thinking about this one (and had better
> ideas for the other patch where I went with the first thing that came to
> mind w/r/t using a flag).

Yeah, I'm gonna rework this one and resend. We can clearly see that
there's no excuse for an API to not grab a job!

Michal
