Re: [PATCH 1/3] virthreadpool: Allow setting identity for workers

2021-10-27 Thread Michal Prívozník
On 10/26/21 5:06 PM, Daniel P. Berrangé wrote:
> On Tue, Oct 26, 2021 at 04:49:55PM +0200, Michal Prívozník wrote:
>> On 10/26/21 4:39 PM, Daniel P. Berrangé wrote:
>>> On Tue, Oct 26, 2021 at 12:17:06PM +0200, Michal Privoznik wrote:
>>>> In some cases the worker func running inside the pool may rely on
>>>> virIdentity. While the worker func could check for an identity and
>>>> set one itself, that is not optimal: it may not have access to the
>>>> identity of the thread creating the pool and thus would have to call
>>>> virIdentityGetSystem(). Allow passing an identity when creating the
>>>> pool.
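
A minimal sketch of a caller using the new parameter. The names
exampleWorker and exampleNewPool, the pool name and the worker counts
are made up for illustration; only the virThreadPoolNewFull() signature
and the fact that the pool takes its own reference on the identity come
from the patch quoted later in the thread:

#include "virthreadpool.h"
#include "viridentity.h"

static void
exampleWorker(void *jobdata, void *opaque)
{
    /* With the patch applied, the pool worker has already called
     * virIdentitySetCurrent(pool->identity) before invoking us. */
}

static virThreadPool *
exampleNewPool(void *opaque)
{
    /* The system identity is just an example; a caller could pass
     * whatever virIdentity it holds, or NULL for the old behaviour. */
    g_autoptr(virIdentity) ident = virIdentityGetSystem();

    /* The pool g_object_ref()s the identity, so dropping our
     * reference on return is fine. */
    return virThreadPoolNewFull(0, 1, 0,
                                exampleWorker,
                                "example-worker",
                                ident,
                                opaque);
}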
>>>
>>> I wonder if we should have an identity passed via virThreadPoolSendJob,
>>> so whatever queues the job can preserve its identity ?
>>
>> I thought about this too, but the thread that's doing that doesn't have
>> an identity set either:
>>
>> #0  virThreadPoolSendJob (pool=0x7f2ff010a370, priority=0, 
>> jobData=0x7f2fdc0b1f10) at ../src/util/virthreadpool.c:390
>> #1  0x7f30301f5240 in qemuProcessEventSubmit (driver=0x7f2ff0022370, 
>> event=0x7f3012ffc880) at ../src/qemu/qemu_process.c:300
>> #2  0x7f30301f8d13 in qemuProcessHandleSerialChanged 
>> (mon=0x7f2ff00295d0, vm=0x7f2ff0037830, devAlias=0x7f2fdc12f9e0 "channel0", 
>> connected=false, opaque=0x7f2ff0022370) at ../src/qemu/qemu_process.c:1540
>> #3  0x7f30301c37f2 in qemuMonitorEmitSerialChange (mon=0x7f2ff00295d0, 
>> devAlias=0x7f2fdc12f9e0 "channel0", connected=false) at 
>> ../src/qemu/qemu_monitor.c:1373
>> #4  0x7f30301d48c8 in qemuMonitorJSONHandleSerialChange 
>> (mon=0x7f2ff00295d0, data=0x7f2fdc0b2b40) at 
>> ../src/qemu/qemu_monitor_json.c:1145
>> #5  0x7f30301d1725 in qemuMonitorJSONIOProcessEvent (mon=0x7f2ff00295d0, 
>> obj=0x7f2fdc0b2770) at ../src/qemu/qemu_monitor_json.c:208
>> #6  0x7f30301d1933 in qemuMonitorJSONIOProcessLine (mon=0x7f2ff00295d0, 
>> line=0x7f2fdc0b4cd0 "{\"timestamp\": {\"seconds\": 1635259527, 
>> \"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": 
>> {\"open\": false, \"id\": \"channel0\"}}", msg=0x0) at 
>> ../src/qemu/qemu_monitor_json.c:236
>> #7  0x7f30301d1bcc in qemuMonitorJSONIOProcess (mon=0x7f2ff00295d0, 
>> data=0x7f2fdc12f590 "{\"timestamp\": {\"seconds\": 1635259527, 
>> \"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": 
>> {\"open\": false, \"id\": \"channel0\"}}\r\n", len=135, msg=0x0) at 
>> ../src/qemu/qemu_monitor_json.c:274
>> #8  0x7f30301c0400 in qemuMonitorIOProcess (mon=0x7f2ff00295d0) at 
>> ../src/qemu/qemu_monitor.c:347
>> #9  0x7f30301c0c80 in qemuMonitorIO (socket=0x7f2ff0017280, cond=0, 
>> opaque=0x7f2ff00295d0) at ../src/qemu/qemu_monitor.c:573
>> #10 0x7f3036ba8c77 in socket_source_dispatch () at 
>> /usr/lib64/libgio-2.0.so.0
>> #11 0x7f3036d8b9d0 in g_main_context_dispatch () at 
>> /usr/lib64/libglib-2.0.so.0
>> #12 0x7f3036d8bd78 in g_main_context_iterate.constprop () at 
>> /usr/lib64/libglib-2.0.so.0
>> #13 0x7f3036d8c06b in g_main_loop_run () at /usr/lib64/libglib-2.0.so.0
>> #14 0x7f3036f81d7b in virEventThreadWorker (opaque=0x7f2ff00b10c0) at 
>> ../src/util/vireventthread.c:124
>> #15 0x7f3036db52bd in g_thread_proxy () at /usr/lib64/libglib-2.0.so.0
>> #16 0x7f3036235e3e in start_thread () at /lib64/libpthread.so.0
>> #17 0x7f30369ffd6f in clone () at /lib64/libc.so.6
>>
>> virThreadPoolSendJob 19 $ call virIdentityGetCurrent()
>> $1 = 0x0
> 
> Hmm, yes, because that's from the event loop thread.
> 
> This actually makes me wonder whether we need the worker pool at all
> here. The worker pool is global to the driver, so events for all VMs are
> being processed in the pool and so will contend for resources.
> 
> This made sense when all monitor I/O was processed by the main event
> loop thread, but now we have a per-VM event loop thread. It seems we
> could potentially just process events directly in the per-VM thread,
> unless something they do synchronously needs to do more monitor I/O.

Some of them do. They acquire a MODIFY job, which may result in a
deadlock. For instance, a thread acquires the job and proceeds to the
monitor, and just as it tries to send something an event is delivered.
Say that event also needs a MODIFY job. It can't acquire one, because
the job is held by the thread that's talking on the monitor; and if the
event were handled synchronously in the per-VM event loop thread, that
thread would block on the job and never dispatch the monitor reply the
job holder is waiting for.
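
To make the ordering concrete, here is a toy model in plain pthreads
(nothing below is libvirt code; "modify_job" stands in for the domain's
MODIFY job and "reply_ready" for the monitor reply that only the per-VM
event loop thread can deliver). It deadlocks whenever the API thread
grabs the job first, which is exactly the situation described above:

/* build: cc -pthread toy-deadlock.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t modify_job = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t reply_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t reply_cond = PTHREAD_COND_INITIALIZER;
static bool reply_ready;

/* An API worker: owns the MODIFY job, then waits for the monitor reply. */
static void *api_worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&modify_job);      /* "begin MODIFY job" */
    pthread_mutex_lock(&reply_lock);
    while (!reply_ready)                  /* "send on monitor, wait for reply" */
        pthread_cond_wait(&reply_cond, &reply_lock);
    pthread_mutex_unlock(&reply_lock);
    pthread_mutex_unlock(&modify_job);
    return NULL;
}

/* The per-VM event thread, if it handled the event synchronously. */
static void *event_thread(void *arg)
{
    (void)arg;
    sleep(1);                             /* let the API worker take the job first */

    /* An event needing a MODIFY job arrives before the reply is read... */
    pthread_mutex_lock(&modify_job);      /* blocks forever on api_worker */

    /* ...so the reply below is never delivered: a circular wait. */
    pthread_mutex_lock(&reply_lock);
    reply_ready = true;
    pthread_cond_signal(&reply_cond);
    pthread_mutex_unlock(&reply_lock);

    pthread_mutex_unlock(&modify_job);
    return NULL;
}

int main(void)
{
    pthread_t a, e;
    pthread_create(&a, NULL, api_worker, NULL);
    pthread_create(&e, NULL, event_thread, NULL);
    pthread_join(a, NULL);
    pthread_join(e, NULL);
    printf("never reached\n");
    return 0;
}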

> 
> Nonetheless, that's too big of a change to do in a hurry, so let's
> go with your patches here.

Okay, pushed. Thanks.

Michal



Re: [PATCH 1/3] virthreadpool: Allow setting identity for workers

2021-10-26 Thread Daniel P. Berrangé
On Tue, Oct 26, 2021 at 04:49:55PM +0200, Michal Prívozník wrote:
> On 10/26/21 4:39 PM, Daniel P. Berrangé wrote:
> > On Tue, Oct 26, 2021 at 12:17:06PM +0200, Michal Privoznik wrote:
> >> In some cases the worker func running inside the pool may rely on
> >> virIdentity. While the worker func could check for an identity and
> >> set one itself, that is not optimal: it may not have access to the
> >> identity of the thread creating the pool and thus would have to call
> >> virIdentityGetSystem(). Allow passing an identity when creating the
> >> pool.
> > 
> > I wonder if we should have an identity passed via virThreadPoolSendJob,
> > so whatever queues the job can preserve its identity ?
> 
> I thought about this too, but the thread that's doing that doesn't have
> an identity set either:
> 
> #0  virThreadPoolSendJob (pool=0x7f2ff010a370, priority=0, 
> jobData=0x7f2fdc0b1f10) at ../src/util/virthreadpool.c:390
> #1  0x7f30301f5240 in qemuProcessEventSubmit (driver=0x7f2ff0022370, 
> event=0x7f3012ffc880) at ../src/qemu/qemu_process.c:300
> #2  0x7f30301f8d13 in qemuProcessHandleSerialChanged (mon=0x7f2ff00295d0, 
> vm=0x7f2ff0037830, devAlias=0x7f2fdc12f9e0 "channel0", connected=false, 
> opaque=0x7f2ff0022370) at ../src/qemu/qemu_process.c:1540
> #3  0x7f30301c37f2 in qemuMonitorEmitSerialChange (mon=0x7f2ff00295d0, 
> devAlias=0x7f2fdc12f9e0 "channel0", connected=false) at 
> ../src/qemu/qemu_monitor.c:1373
> #4  0x7f30301d48c8 in qemuMonitorJSONHandleSerialChange 
> (mon=0x7f2ff00295d0, data=0x7f2fdc0b2b40) at 
> ../src/qemu/qemu_monitor_json.c:1145
> #5  0x7f30301d1725 in qemuMonitorJSONIOProcessEvent (mon=0x7f2ff00295d0, 
> obj=0x7f2fdc0b2770) at ../src/qemu/qemu_monitor_json.c:208
> #6  0x7f30301d1933 in qemuMonitorJSONIOProcessLine (mon=0x7f2ff00295d0, 
> line=0x7f2fdc0b4cd0 "{\"timestamp\": {\"seconds\": 1635259527, 
> \"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": 
> {\"open\": false, \"id\": \"channel0\"}}", msg=0x0) at 
> ../src/qemu/qemu_monitor_json.c:236
> #7  0x7f30301d1bcc in qemuMonitorJSONIOProcess (mon=0x7f2ff00295d0, 
> data=0x7f2fdc12f590 "{\"timestamp\": {\"seconds\": 1635259527, 
> \"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": 
> {\"open\": false, \"id\": \"channel0\"}}\r\n", len=135, msg=0x0) at 
> ../src/qemu/qemu_monitor_json.c:274
> #8  0x7f30301c0400 in qemuMonitorIOProcess (mon=0x7f2ff00295d0) at 
> ../src/qemu/qemu_monitor.c:347
> #9  0x7f30301c0c80 in qemuMonitorIO (socket=0x7f2ff0017280, cond=0, 
> opaque=0x7f2ff00295d0) at ../src/qemu/qemu_monitor.c:573
> #10 0x7f3036ba8c77 in socket_source_dispatch () at 
> /usr/lib64/libgio-2.0.so.0
> #11 0x7f3036d8b9d0 in g_main_context_dispatch () at 
> /usr/lib64/libglib-2.0.so.0
> #12 0x7f3036d8bd78 in g_main_context_iterate.constprop () at 
> /usr/lib64/libglib-2.0.so.0
> #13 0x7f3036d8c06b in g_main_loop_run () at /usr/lib64/libglib-2.0.so.0
> #14 0x7f3036f81d7b in virEventThreadWorker (opaque=0x7f2ff00b10c0) at 
> ../src/util/vireventthread.c:124
> #15 0x7f3036db52bd in g_thread_proxy () at /usr/lib64/libglib-2.0.so.0
> #16 0x7f3036235e3e in start_thread () at /lib64/libpthread.so.0
> #17 0x7f30369ffd6f in clone () at /lib64/libc.so.6
> 
> virThreadPoolSendJob 19 $ call virIdentityGetCurrent()
> $1 = 0x0

Hmm, yes, because that's from the event loop thread.

This actually makes me wonder whether we need the worker pool at all
here. The worker pool is global to the driver, so events for all VMs are
being processed in the pool and so will contend for resources.

This made sense when all monitor I/O was processed by the main event
loop thread, but now we have a per-VM event loop thread. It seems we
could potentially just process events directly in the per-VM thread,
unless something they do synchronously needs to do more monitor I/O.

Nonetheless, that's too big of a change to do in a hurry, so let's
go with your patches here.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



Re: [PATCH 1/3] virthreadpool: Allow setting identity for workers

2021-10-26 Thread Michal Prívozník
On 10/26/21 4:39 PM, Daniel P. Berrangé wrote:
> On Tue, Oct 26, 2021 at 12:17:06PM +0200, Michal Privoznik wrote:
>> In some cases the worker func running inside the pool may rely on
>> virIdentity. While the worker func could check for an identity and
>> set one itself, that is not optimal: it may not have access to the
>> identity of the thread creating the pool and thus would have to call
>> virIdentityGetSystem(). Allow passing an identity when creating the
>> pool.
> 
> I wonder if we should have an identity passed via virThreadPoolSendJob,
> so whatever queues the job can preserve its identity ?

I thought about this too, but the thread that's doing that doesn't have
an identity set either:

#0  virThreadPoolSendJob (pool=0x7f2ff010a370, priority=0, 
jobData=0x7f2fdc0b1f10) at ../src/util/virthreadpool.c:390
#1  0x7f30301f5240 in qemuProcessEventSubmit (driver=0x7f2ff0022370, 
event=0x7f3012ffc880) at ../src/qemu/qemu_process.c:300
#2  0x7f30301f8d13 in qemuProcessHandleSerialChanged (mon=0x7f2ff00295d0, 
vm=0x7f2ff0037830, devAlias=0x7f2fdc12f9e0 "channel0", connected=false, 
opaque=0x7f2ff0022370) at ../src/qemu/qemu_process.c:1540
#3  0x7f30301c37f2 in qemuMonitorEmitSerialChange (mon=0x7f2ff00295d0, 
devAlias=0x7f2fdc12f9e0 "channel0", connected=false) at 
../src/qemu/qemu_monitor.c:1373
#4  0x7f30301d48c8 in qemuMonitorJSONHandleSerialChange 
(mon=0x7f2ff00295d0, data=0x7f2fdc0b2b40) at 
../src/qemu/qemu_monitor_json.c:1145
#5  0x7f30301d1725 in qemuMonitorJSONIOProcessEvent (mon=0x7f2ff00295d0, 
obj=0x7f2fdc0b2770) at ../src/qemu/qemu_monitor_json.c:208
#6  0x7f30301d1933 in qemuMonitorJSONIOProcessLine (mon=0x7f2ff00295d0, 
line=0x7f2fdc0b4cd0 "{\"timestamp\": {\"seconds\": 1635259527, 
\"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": {\"open\": 
false, \"id\": \"channel0\"}}", msg=0x0) at ../src/qemu/qemu_monitor_json.c:236
#7  0x7f30301d1bcc in qemuMonitorJSONIOProcess (mon=0x7f2ff00295d0, 
data=0x7f2fdc12f590 "{\"timestamp\": {\"seconds\": 1635259527, 
\"microseconds\": 380958}, \"event\": \"VSERPORT_CHANGE\", \"data\": {\"open\": 
false, \"id\": \"channel0\"}}\r\n", len=135, msg=0x0) at 
../src/qemu/qemu_monitor_json.c:274
#8  0x7f30301c0400 in qemuMonitorIOProcess (mon=0x7f2ff00295d0) at 
../src/qemu/qemu_monitor.c:347
#9  0x7f30301c0c80 in qemuMonitorIO (socket=0x7f2ff0017280, cond=0, 
opaque=0x7f2ff00295d0) at ../src/qemu/qemu_monitor.c:573
#10 0x7f3036ba8c77 in socket_source_dispatch () at 
/usr/lib64/libgio-2.0.so.0
#11 0x7f3036d8b9d0 in g_main_context_dispatch () at 
/usr/lib64/libglib-2.0.so.0
#12 0x7f3036d8bd78 in g_main_context_iterate.constprop () at 
/usr/lib64/libglib-2.0.so.0
#13 0x7f3036d8c06b in g_main_loop_run () at /usr/lib64/libglib-2.0.so.0
#14 0x7f3036f81d7b in virEventThreadWorker (opaque=0x7f2ff00b10c0) at 
../src/util/vireventthread.c:124
#15 0x7f3036db52bd in g_thread_proxy () at /usr/lib64/libglib-2.0.so.0
#16 0x7f3036235e3e in start_thread () at /lib64/libpthread.so.0
#17 0x7f30369ffd6f in clone () at /lib64/libc.so.6

virThreadPoolSendJob 19 $ call virIdentityGetCurrent()
$1 = 0x0


Michal



Re: [PATCH 1/3] virthreadpool: Allow setting identity for workers

2021-10-26 Thread Daniel P. Berrangé
On Tue, Oct 26, 2021 at 12:17:06PM +0200, Michal Privoznik wrote:
> In some cases the worker func running inside the pool may rely on
> virIdentity. While the worker func could check for an identity and
> set one itself, that is not optimal: it may not have access to the
> identity of the thread creating the pool and thus would have to call
> virIdentityGetSystem(). Allow passing an identity when creating the
> pool.

I wonder if we should have an identity passed via virThreadPoolSendJob,
so whatever queues the job can preserve its identity ?
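
A rough sketch of what that could look like, built on top of the
existing virThreadPoolSendJob() rather than changing it. Everything
named example* here is hypothetical, and error handling plus the exact
identity ref/cleanup rules of virIdentitySetCurrent() are glossed over:

#include "virthreadpool.h"
#include "viridentity.h"

typedef struct {
    virIdentity *identity;   /* whoever queued the job; may be NULL */
    void *jobData;           /* the real payload */
} exampleJobEnvelope;

/* The pool would be created with this as its worker func. */
static void
exampleJobWrapper(void *jobdata, void *opaque)
{
    exampleJobEnvelope *env = jobdata;

    if (env->identity)
        virIdentitySetCurrent(env->identity);

    /* ... run the real handler on env->jobData ... */

    /* A real implementation would also restore the previous identity here. */
    g_clear_object(&env->identity);
    g_free(env);
}

static int
exampleSendJobWithIdentity(virThreadPool *pool,
                           unsigned int priority,
                           void *jobData)
{
    exampleJobEnvelope *env = g_new0(exampleJobEnvelope, 1);

    /* May be NULL, e.g. when called from an event loop thread. */
    env->identity = virIdentityGetCurrent();
    env->jobData = jobData;

    return virThreadPoolSendJob(pool, priority, env);
}

As the backtraces elsewhere in the thread show, the thread queuing these
particular jobs is the per-VM event loop thread and has no identity of
its own, which is why the series ends up with a pool-level identity
instead.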

> 
> Signed-off-by: Michal Privoznik 
> ---
>  src/nwfilter/nwfilter_dhcpsnoop.c |  1 +
>  src/qemu/qemu_driver.c            |  4 +++-
>  src/rpc/virnetserver.c            |  1 +
>  src/util/virthreadpool.c          | 12 ++++++++++++
>  src/util/virthreadpool.h          | 12 +++++++-----
>  5 files changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/src/nwfilter/nwfilter_dhcpsnoop.c b/src/nwfilter/nwfilter_dhcpsnoop.c
> index 4b62a7b661..b0297f0741 100644
> --- a/src/nwfilter/nwfilter_dhcpsnoop.c
> +++ b/src/nwfilter/nwfilter_dhcpsnoop.c
> @@ -1333,6 +1333,7 @@ virNWFilterDHCPSnoopThread(void *req0)
>          worker = virThreadPoolNewFull(1, 1, 0,
>                                        virNWFilterDHCPDecodeWorker,
>                                        "dhcp-decode",
> +                                      NULL,
>                                        req);
>      }
>  
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 6997dc7dea..a8bf0ecc6f 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -914,7 +914,9 @@ qemuStateInitialize(bool privileged,
>       * running domains since there might occur some QEMU monitor
>       * events that will be dispatched to the worker pool */
>      qemu_driver->workerPool = virThreadPoolNewFull(0, 1, 0, qemuProcessEventHandler,
> -                                                   "qemu-event", qemu_driver);
> +                                                   "qemu-event",
> +                                                   NULL,
> +                                                   qemu_driver);
>      if (!qemu_driver->workerPool)
>          goto error;
>  
> diff --git a/src/rpc/virnetserver.c b/src/rpc/virnetserver.c
> index dc8f32b095..c7b4939398 100644
> --- a/src/rpc/virnetserver.c
> +++ b/src/rpc/virnetserver.c
> @@ -378,6 +378,7 @@ virNetServer *virNetServerNew(const char *name,
>                                               priority_workers,
>                                               virNetServerHandleJob,
>                                               "rpc-worker",
> +                                             NULL,
>                                               srv)))
>          goto error;
>  
> diff --git a/src/util/virthreadpool.c b/src/util/virthreadpool.c
> index 92b7cac286..7bf4333885 100644
> --- a/src/util/virthreadpool.c
> +++ b/src/util/virthreadpool.c
> @@ -55,6 +55,8 @@ struct _virThreadPool {
>      virThreadPoolJobList jobList;
>      size_t jobQueueDepth;
> 
> +    virIdentity *identity;
> +
>      virMutex mutex;
>      virCond cond;
>      virCond quit_cond;
> @@ -99,6 +101,9 @@ static void virThreadPoolWorker(void *opaque)
>  
>      virMutexLock(&pool->mutex);
> 
> +    if (pool->identity)
> +        virIdentitySetCurrent(pool->identity);
> +
>      while (1) {
>          /* In order to support async worker termination, we need ensure that
>           * both busy and free workers know if they need to terminated. Thus,
> @@ -219,6 +224,7 @@ virThreadPoolNewFull(size_t minWorkers,
>                       size_t prioWorkers,
>                       virThreadPoolJobFunc func,
>                       const char *name,
> +                     virIdentity *identity,
>                       void *opaque)
>  {
>      virThreadPool *pool;
> @@ -234,6 +240,9 @@ virThreadPoolNewFull(size_t minWorkers,
>      pool->jobName = name;
>      pool->jobOpaque = opaque;
> 
> +    if (identity)
> +        pool->identity = g_object_ref(identity);
> +
>      if (virMutexInit(&pool->mutex) < 0)
>          goto error;
>      if (virCondInit(&pool->cond) < 0)
> @@ -300,6 +309,9 @@ void virThreadPoolFree(virThreadPool *pool)
>      virMutexLock(&pool->mutex);
>      virThreadPoolDrainLocked(pool);
> 
> +    if (pool->identity)
> +        g_object_unref(pool->identity);
> +
>      g_free(pool->workers);
>      virMutexUnlock(&pool->mutex);
>      virMutexDestroy(&pool->mutex);
> diff --git a/src/util/virthreadpool.h b/src/util/virthreadpool.h
> index 619d128e9a..c6b9f31916 100644
> --- a/src/util/virthreadpool.h
> +++ b/src/util/virthreadpool.h
> @@ -22,17 +22,19 @@
>  #pragma once
>  
>  #include "internal.h"
> +#include "viridentity.h"
>  
>  typedef struct _virThreadPool virThreadPool;
>  
>  typedef void (*virThreadPoolJobFunc)(void *jobdata, void *opaque);
>  
>  virThreadPool *virThreadPoolNewFull(size_t minWorkers,
> -  size_t maxWorkers,
> -