e transaction
> callbacks can be changed, too.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Emanuele Giuseppe Esposito
On 17/08/2023 at 14:50, Kevin Wolf wrote:
> Don't assume specific parameter names like 'bs' or 'blk' in the
> generated code, but use the actual name.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Emanuele Giuseppe Esposito
On 17/08/2023 at 14:50, Kevin Wolf wrote:
> The function reads the parents list, so it needs to hold the graph lock.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Emanuele Giuseppe Esposito
in Wolf
Reviewed-by: Emanuele Giuseppe Esposito
y can't call functions that take it
> internally.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Emanuele Giuseppe Esposito
y can't call functions that take it internally.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Emanuele Giuseppe Esposito
the same functionality that we
want.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/qemu/coroutine.h | 10 ++
util/qemu-coroutine-lock.c | 26 ++
2 files changed, 20 insertions(+), 16 deletions(-)
diff --git a/include/qemu/coroutine.h b/include/qemu
Similar to the implementation in lockable.h, implement macros to
automatically take and release the rdlock.
Create the empty GraphLockable struct only to use it as a type for
G_DEFINE_AUTOPTR_CLEANUP_FUNC.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/block/graph-lock.h | 30
re is the
reader, but only if there is one.
If instead the original AioContext gets deleted, we need to transfer the
current number of readers into a global shared counter, so that the writer
is always aware of all readers.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/graph-lock
A drain performed in a coroutine schedules a BH in the main loop.
However, the drain itself is still a read, and we need to signal to the
writer that we are going through the graph.
Signed-off-by: Emanuele Giuseppe Esposito
---
block/mirror.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/block
It seems that aio_wait_kick always required a memory barrier
or atomic operation in the caller, but almost nobody actually
took care of doing it.
Let's put the barrier in the function instead.
Signed-off-by: Emanuele Giuseppe Esposito
---
util/aio-wait.c | 3 ++-
1 file changed, 2 insertions
Remove the old assert_bdrv_graph_writable, and replace it with
the new version using graph-lock API.
See the function documentation for more information.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c| 8
block/graph-lock.c
omit all functions protected by the added lock to avoid having duplicates
when querying for new callbacks.
Emanuele Giuseppe Esposito (8):
aio_wait_kick: add missing memory barrier
coroutine-lock: release lock when restarting all coroutines
block: introduce a lock to protect graph operations
Add/remove the AioContext in aio_context_list in graph-lock.c only when
it is actually created/destroyed.
Signed-off-by: Emanuele Giuseppe Esposito
---
util/async.c | 4
util/meson.build | 1 +
2 files changed, 5 insertions(+)
diff --git a/util/async.c b/util/async.c
index
ection.
Therefore, change ->attach and ->detach to return true if they
are modifying the lock, so that we don't take it twice or release
it temporarily.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c | 31 +++
block/block-backend.c
On 26/04/2022 at 10:51, Emanuele Giuseppe Esposito wrote:
> Luckily, most of the cases where we recursively go through a graph are
> the BlockDriverState callback functions in block_int-common.h
> In order to understand what to protect, I categorized the callbacks in
> block_
On 09/03/2022 at 14:26, Emanuele Giuseppe Esposito wrote:
>>> * Drains allow the caller (either main loop or iothread running
>>> the context) to wait for all in_flight requests and operations
>>> of a BDS: normal drains target a given node and its parents, while
&g
n the next series? So that I can follow
too.
Thank you,
Emanuele
From 84fcea52c09024adcfe24bb0d6d2ec6842c6826b Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito
Date: Tue, 17 May 2022 13:35:54 -0400
Subject: [PATCH] block-coroutine-wrapper: remove includes from coroutines.h
These incl
On 17/05/2022 at 12:59, Stefan Hajnoczi wrote:
> On Wed, May 04, 2022 at 02:39:05PM +0100, Stefan Hajnoczi wrote:
>> On Tue, Apr 26, 2022 at 04:51:06AM -0400, Emanuele Giuseppe Esposito wrote:
>>> This is a new attempt to replace the need to take the AioContext lock to
drv_drained_end() instead. They are "mixed"
> functions that can be called from coroutine context. Unlike
> bdrv_co_drain(), these functions provide control of the length of the
> drained section, which is usually the right thing.
>
> Signed-off-by: Stefan Hajnoczi
R
On 23/05/2022 at 15:15, Stefan Hajnoczi wrote:
> On Mon, May 23, 2022 at 10:48:48AM +0200, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 22/05/2022 at 17:06, Stefan Hajnoczi wrote:
>>> On Wed, May 18, 2022 at 06:14:17PM +0200, Kevin Wolf wrote:
>>>>
On 22/05/2022 at 17:06, Stefan Hajnoczi wrote:
> On Wed, May 18, 2022 at 06:14:17PM +0200, Kevin Wolf wrote:
>> On 18.05.2022 at 14:43, Paolo Bonzini wrote:
>>> On 5/18/22 14:28, Emanuele Giuseppe Esposito wrote:
>>>> For example, all callers of bdrv_ope
.
Suggested-by: Paolo Bonzini
Signed-off-by: Emanuele Giuseppe Esposito
---
include/block/aio-wait.h | 2 ++
util/aio-wait.c | 16 +++-
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index b39eefb38d..54840f8622
On 24/05/2022 at 09:08, Paolo Bonzini wrote:
> On 5/23/22 18:04, Vladimir Sementsov-Ogievskiy wrote:
>>
>> I have a doubt about how aio_wait_bh_oneshot() works. Exactly, I see
>> that data->done is not accessed atomically, and doesn't have any
>> barrier protecting it..
>>
>> Is following
On 24/05/2022 at 14:10, Kevin Wolf wrote:
> On 18.05.2022 at 14:28, Emanuele Giuseppe Esposito wrote:
>> label: // read till the end to see why I wrote this here
>>
>> I was hoping someone from the "No" party would answer to your question,
>> bec
On 24/06/2022 at 17:28, Paolo Bonzini wrote:
> On 6/24/22 16:29, Kevin Wolf wrote:
>> Yes, I think Vladimir is having the same difficulties with reading the
>> series as I had. And I believe his suggestion would make the
>> intermediate states less impossible to review. The question is how
On 22/06/2022 at 20:38, Vladimir Sementsov-Ogievskiy wrote:
> On 6/22/22 17:26, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 21/06/2022 at 19:26, Vladimir Sementsov-Ogievskiy wrote:
>>> On 6/16/22 16:18, Emanuele Giuseppe Esposito wrote:
>>>> Wit
On 23/06/2022 at 13:10, Vladimir Sementsov-Ogievskiy wrote:
> On 6/23/22 12:08, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 22/06/2022 at 20:38, Vladimir Sementsov-Ogievskiy wrote:
>>> On 6/22/22 17:26, Emanuele Giuseppe Esposito wrote:
>>>>
>>
On 05/07/2022 at 10:17, Emanuele Giuseppe Esposito wrote:
>
>
> On 05/07/2022 at 10:14, Stefan Hajnoczi wrote:
>> On Wed, Jun 29, 2022 at 10:15:31AM -0400, Emanuele Giuseppe Esposito wrote:
>>> diff --git a/blockdev.c b/blockdev.c
>>> index 71f793c4ab..5b790
On 05/07/2022 at 16:45, Stefan Hajnoczi wrote:
> On Thu, Jun 09, 2022 at 10:37:26AM -0400, Emanuele Giuseppe Esposito wrote:
>> @@ -946,17 +955,20 @@ static void virtio_blk_reset(VirtIODevice *vdev)
>> * stops all Iothreads.
>> */
>
On 05/07/2022 at 16:39, Stefan Hajnoczi wrote:
> On Thu, Jun 09, 2022 at 10:37:25AM -0400, Emanuele Giuseppe Esposito wrote:
>> Just as done in the block API, mark functions in virtio-blk
>> that are called also from iothread(s).
>>
>> We know such functions a
On 08/07/2022 at 10:42, Emanuele Giuseppe Esposito wrote:
> Hello everyone,
>
> As you all know, I am trying to find a way to replace the well known
> AioContext lock with something else that makes sense and provides the
> same (or even better) guarantees than using this lock.
On 05/07/2022 at 16:11, Stefan Hajnoczi wrote:
> On Thu, Jun 09, 2022 at 10:37:20AM -0400, Emanuele Giuseppe Esposito wrote:
>> @@ -146,7 +147,6 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
>>
>> s->dataplane_starting = false;
>>
On 08/07/2022 at 11:33, Emanuele Giuseppe Esposito wrote:
>
>
> On 05/07/2022 at 16:45, Stefan Hajnoczi wrote:
>> On Thu, Jun 09, 2022 at 10:37:26AM -0400, Emanuele Giuseppe Esposito wrote:
>>> @@ -946,17 +955,20 @@ static void virtio_blk_reset(VirtIODevice *vde
On 05/07/2022 at 16:23, Stefan Hajnoczi wrote:
> On Thu, Jun 09, 2022 at 10:37:22AM -0400, Emanuele Giuseppe Esposito wrote:
>> diff --git a/hw/block/dataplane/virtio-blk.c
>> b/hw/block/dataplane/virtio-blk.c
>> index f9224f23d2..03e10a36a4 100644
>> --- a/hw/b
Hello everyone,
As you all know, I am trying to find a way to replace the well known
AioContext lock with something else that makes sense and provides the
same (or even better) guarantees than using this lock.
The reasons for this change have been explained over and over and I don't
really want
On 08/07/2022 at 21:25, Vladimir Sementsov-Ogievskiy wrote:
>> static bool job_started(Job *job)
>
> So we can call it both with mutex locked and without. Hope it never race
> with job_start.
Where exactly do you see it called with mutex not held?
I don't see it anywhere, and if you agree
On 21/06/2022 at 17:03, Vladimir Sementsov-Ogievskiy wrote:
> On 6/16/22 16:18, Emanuele Giuseppe Esposito wrote:
>> In preparation for the job_lock/unlock usage, create _locked
>> duplicates of some functions, since they will be sometimes called with
>> job_mutex held
On 21/06/2022 at 19:26, Vladimir Sementsov-Ogievskiy wrote:
> On 6/16/22 16:18, Emanuele Giuseppe Esposito wrote:
>> With the *nop* job_lock/unlock placed, rename the static
>> functions that are always under job_mutex, adding "_locked" suffix.
>>
>> L
On 22/06/2022 at 20:38, Vladimir Sementsov-Ogievskiy wrote:
> On 6/22/22 17:26, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 21/06/2022 at 19:26, Vladimir Sementsov-Ogievskiy wrote:
>>> On 6/16/22 16:18, Emanuele Giuseppe Esposito wrote:
>>>> Wit
On 28/06/2022 at 12:47, Vladimir Sementsov-Ogievskiy wrote:
> On 6/28/22 10:40, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 22/06/2022 at 20:38, Vladimir Sementsov-Ogievskiy wrote:
>>> On 6/22/22 17:26, Emanuele Giuseppe Esposito wrote:
>>>>
>>
On 24/06/2022 at 20:22, Vladimir Sementsov-Ogievskiy wrote:
> I've already acked this (honestly, because Stefan do), but still, want
> to clarify:
>
> On 6/16/22 16:18, Emanuele Giuseppe Esposito wrote:
>> job mutex will be used to protect the job struct elements and
On 11/07/2022 at 14:04, Vladimir Sementsov-Ogievskiy wrote:
> On 7/6/22 23:15, Emanuele Giuseppe Esposito wrote:
>> Just as done with job.h, create _locked() functions in blockjob.h
>>
>> These functions will be later useful when caller has already taken
>> the
From: Paolo Bonzini
We want to make sure access of job->aio_context is always done
under either BQL or job_mutex. The problem is that using
aio_co_enter(job->aiocontext, job->co) in job_start and job_enter_cond
makes the coroutine immediately resume, so we can't hold the job lock.
And caching it
change the nop into
an actual mutex and remove the aiocontext lock.
Since job_mutex is already being used, add static
real_job_{lock/unlock} for the existing usage.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
include
job_event_* functions can all be static, as they are not used
outside job.c.
Same applies for job_txn_add_job().
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
include/qemu/job.h | 18 --
job.c
use JOB_LOCK_GUARD and WITH_JOB_LOCK_GUARD
* mu(u)ltiple typos in commit messages
* job API split patches are sent separately in another series
* use of empty job_{lock/unlock} and JOB_LOCK_GUARD/WITH_JOB_LOCK_GUARD
to avoid deadlocks and simplify the reviewer's job
* move patch 11 (block_job_query:
Same as the AIO_WAIT_WHILE macro, but if we are in the main loop,
do not release and then acquire ctx's AioContext.
Once all AioContext locks go away, this macro will replace
AIO_WAIT_WHILE.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov
ned-off-by: Emanuele Giuseppe Esposito
---
blockdev.c | 74 +---
include/qemu/job.h | 22 -
job-qmp.c| 44 -
job.c| 82 ++--
tests/unit/t
*.
Signed-off-by: Emanuele Giuseppe Esposito
---
blockdev.c | 67 +-
job-qmp.c | 55 ++--
2 files changed, 84 insertions(+), 38 deletions(-)
diff --git a/blockdev.c b/blockdev.c
index 9230888e34..71f793c4ab
Introduce job_set_aio_context and make sure that
the context is set under BQL, job_mutex and drain.
Also make sure all other places where the aiocontext is read
are protected.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Suggested-by: Paolo Bonzini
Signed-off-by: Emanuel
job_pause() is/will be used only
in tests, to avoid writing:
WITH_JOB_LOCK_GUARD() {
    job_pause_locked();
}
So it is not worth keeping job_pause(); just use the guard.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
tests/unit
Once the job lock is used and the AioContext lock is removed, mirror has
to perform job operations under the same critical section,
using the helpers prepared in the previous commit.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed
We are always using the given bs AioContext, so there is no need
to take the job one (which is identical anyway).
This also reduces the points we need to check when protecting
job.aio_context field.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
block/commit.c | 4
-by: Emanuele Giuseppe Esposito
---
blockjob.c | 52
include/block/blockjob.h | 15
2 files changed, 57 insertions(+), 10 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index 7da59a1f1c..0d59aba439 100644
--- a/blockjob.c
+++ b
They are all called with job_lock held, in job_event_*_locked().
Signed-off-by: Emanuele Giuseppe Esposito
---
blockjob.c | 25 +++--
1 file changed, 15 insertions(+), 10 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index 0d59aba439..70952879d8 100644
--- a/blockjob.c
the lock internally.
Instead we want
JOB_LOCK_GUARD();
for(job = job_next_locked(); ...)
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c| 20 +++---
blockdev.c | 12
intended.
Signed-off-by: Emanuele Giuseppe Esposito
---
blockjob.c | 20
include/qemu/job.h | 37 ++---
job.c | 15 +++
3 files changed, 49 insertions(+), 23 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index
These public functions are not used anywhere, thus can be dropped.
Also, since this is the final job API that doesn't use AioContext
lock and replaces it with job_lock, adjust all remaining function
documentation to clearly specify if the job lock is taken or not.
Signed-off-by: Emanuele Giuseppe
These functions will be used later when we use the job lock.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/qemu/job.h | 15 ---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/include
These public functions are not used anywhere, thus can be dropped.
Signed-off-by: Emanuele Giuseppe Esposito
---
blockjob.c | 30 --
include/block/blockjob.h | 33 +
2 files changed, 13 insertions(+), 50 deletions
_job_{lock/unlock}.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/qemu/job.h | 73 +-
job.c | 607 +++--
2 files changed, 499 insertions(+), 181 deletion
Categorize the fields in struct Job to understand which ones
need to be protected by the job mutex and which don't.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
include/qemu/job.h | 61 +++---
1 file changed, 36
Not sure what the atomic here was supposed to do, since job.busy
is protected by the job lock. Since the whole function
is called under job_mutex, just remove the atomic.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
blockjob.c | 2 +-
1 file changed, 1 insertion
On 06/07/2022 at 14:36, Emanuele Giuseppe Esposito wrote:
>
>
> On 06/07/2022 at 14:23, Vladimir Sementsov-Ogievskiy wrote:
>> On 7/6/22 15:05, Emanuele Giuseppe Esposito wrote:
>>>
>>>
>>> On 05/07/2022 at 17:01, Vladimir Sementsov-Ogievskiy wrote
On 06/07/2022 at 14:23, Vladimir Sementsov-Ogievskiy wrote:
> On 7/6/22 15:05, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 05/07/2022 at 17:01, Vladimir Sementsov-Ogievskiy wrote:
>>> On 6/29/22 17:15, Emanuele Giuseppe Esposito wrote:
>>>&g
On 05/07/2022 at 17:01, Vladimir Sementsov-Ogievskiy wrote:
> On 6/29/22 17:15, Emanuele Giuseppe Esposito wrote:
>> Just as done with job.h, create _locked() functions in blockjob.h
>
> We modify not only blockjob.h, I'd s/blockjob.h/blockjob/ in subject.
>
> Also,
On 05/07/2022 at 12:53, Vladimir Sementsov-Ogievskiy wrote:
> On 6/29/22 17:15, Emanuele Giuseppe Esposito wrote:
>> These functions don't need a _locked() counterpart, since
>> they are all called outside job.c and take the lock only
>> internally.
>>
>> Updat
On 05/07/2022 at 12:54, Vladimir Sementsov-Ogievskiy wrote:
> To subject: hmm, the commit doesn't define any function..
>
mark functions called without job lock held?
job API split patches are sent separately in another series
* use of empty job_{lock/unlock} and JOB_LOCK_GUARD/WITH_JOB_LOCK_GUARD
to avoid deadlocks and simplify the reviewer's job
* move patch 11 (block_job_query: remove atomic read) as last
Emanuele Giuseppe Esposito (20):
job.c: make job_mutex and
This comment applies more to job; it was left in blockjob because in the past
the whole job logic was implemented there.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
No functional change intended.
Signed-off-by: Emanuele Giuseppe Esposito
---
blockjob.c | 20
the lock internally.
Instead we want
JOB_LOCK_GUARD();
for(job = job_next_locked(); ...)
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
block.c| 20 +++---
blockdev.c | 12
change the nop into
an actual mutex and remove the aiocontext lock.
Since job_mutex is already being used, add static
real_job_{lock/unlock} for the existing usage.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
include
On 05/07/2022 at 15:07, Stefan Hajnoczi wrote:
> On Wed, Jun 29, 2022 at 10:15:35AM -0400, Emanuele Giuseppe Esposito wrote:
>> Change the job_{lock/unlock} and macros to use job_mutex.
>>
>> Now that they are not nop anymore, remove the aiocontext
>> to avoid
job_event_* functions can all be static, as they are not used
outside job.c.
Same applies for job_txn_add_job().
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
include/qemu/job.h | 18 --
job.c
lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/qemu/job.h | 138 ++-
job.c | 605 -
2 files changed, 558 insertions(+), 185 deletions(-)
diff --git a/include/qemu/job.h b/include/qemu/job.h
index
-by: Emanuele Giuseppe Esposito
---
blockjob.c | 52
include/block/blockjob.h | 18 ++
2 files changed, 60 insertions(+), 10 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index 7da59a1f1c..0d59aba439 100644
--- a/blockjob.c
+++ b
Categorize the fields in struct Job to understand which ones
need to be protected by the job mutex and which don't.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
include/qemu/job.h | 61 +++---
1 file changed, 36
The same job lock is also used to protect some of the blockjob fields.
Categorize them just as done in job.h.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/block/blockjob.h | 17 ++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/include/block/blockjob.h
*.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
blockdev.c | 67 +-
job-qmp.c | 57 --
2 files changed, 86 insertions(+), 38 deletions(-)
diff --git a/blockdev.c b/blockdev.c
They are all called with job_lock held, in job_event_*_locked().
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
blockjob.c | 25 +++--
1 file changed, 15 insertions(+), 10 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index bbd297b583
These public functions are not used anywhere, thus can be dropped.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
blockjob.c | 30 --
include/block/blockjob.h | 36 +---
2 files changed, 13
der aiocontext lock.
Also remove real_job_{lock/unlock}, as they are replaced by the
public functions.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
---
blockdev.c | 74 ---
include/qemu/job.h | 22
job-qm
These public functions are not used anywhere, thus can be dropped.
Also, since this is the final job API that doesn't use AioContext
lock and replaces it with job_lock, adjust all remaining function
documentation to clearly specify if the job lock is taken or not.
Signed-off-by: Emanuele Giuseppe
From: Paolo Bonzini
We want to make sure access of job->aio_context is always done
under either BQL or job_mutex. The problem is that using
aio_co_enter(job->aiocontext, job->co) in job_start and job_enter_cond
makes the coroutine immediately resume, so we can't hold the job lock.
And caching it
job_pause() is/will be used only
in tests, to avoid writing:
WITH_JOB_LOCK_GUARD() {
    job_pause_locked();
}
So it is not worth keeping job_pause(); just use the guard.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed
Not sure what the atomic here was supposed to do, since job.busy
is protected by the job lock. Since the whole function
is called under job_mutex, just remove the atomic.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Emanuele Giuseppe Esposito
---
blockjob.c | 2 +-
1 file changed, 1 insertion
Same as the AIO_WAIT_WHILE macro, but if we are in the main loop,
do not release and then acquire ctx's AioContext.
Once all AioContext locks go away, this macro will replace
AIO_WAIT_WHILE.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Vladimir Sementsov
We are always using the given bs AioContext, so there is no need
to take the job one (which is identical anyway).
This also reduces the points we need to check when protecting
job.aio_context field.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Emanuele Giuseppe Esposito
---
block/commit.c | 4
Once the job lock is used and the AioContext lock is removed, mirror has
to perform job operations under the same critical section,
using the helpers prepared in the previous commit.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Signed-off-by: Emanuele Giuseppe Esposito
Reviewed
Introduce job_set_aio_context and make sure that
the context is set under BQL, job_mutex and drain.
Also make sure all other places where the aiocontext is read
are protected.
Note: at this stage, job_{lock/unlock} and job lock guard macros
are *nop*.
Suggested-by: Paolo Bonzini
Signed-off-by: Emanuel
called under
job lock.
Signed-off-by: Emanuele Giuseppe Esposito
---
blockjob.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index a2559b97a7..893c8ff08e 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -367,7 +367,8 @@ BlockJobInfo *block_job_query
On 05/07/2022 at 09:39, Stefan Hajnoczi wrote:
> On Wed, Jun 29, 2022 at 10:15:23AM -0400, Emanuele Giuseppe Esposito wrote:
>> +void job_ref(Job *job)
>> +{
>> +JOB_LOCK_GUARD();
>> +job_ref_locked(job);
>> +}
>
> You don't need to fix thi
On 05/07/2022 at 09:58, Stefan Hajnoczi wrote:
> On Wed, Jun 29, 2022 at 10:15:26AM -0400, Emanuele Giuseppe Esposito wrote:
>> +BlockJob *block_job_next(BlockJob *bjob)
>> {
>> -Job *job = job_get(id);
>> +JOB_LOCK_GUARD();
>> +
On 05/07/2022 at 10:14, Stefan Hajnoczi wrote:
> On Wed, Jun 29, 2022 at 10:15:31AM -0400, Emanuele Giuseppe Esposito wrote:
>> diff --git a/blockdev.c b/blockdev.c
>> index 71f793c4ab..5b79093155 100644
>> --- a/blockdev.c
>> +++ b/blockdev.c
&g
On 07/06/2022 at 17:41, Paolo Bonzini wrote:
> On 6/7/22 15:20, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 03/06/2022 at 18:00, Kevin Wolf wrote:
>>> On 14.03.2022 at 14:36, Emanuele Giuseppe Esposito wrote:
>>>> Categorize the fie
On 03/06/2022 at 18:40, Kevin Wolf wrote:
> On 14.03.2022 at 14:36, Emanuele Giuseppe Esposito wrote:
>> Introduce the job locking mechanism through the whole job API,
>> following the comments in job.h and requirements of job-monitor
>> (like the functions in j
On 03/06/2022 at 18:00, Kevin Wolf wrote:
> On 14.03.2022 at 14:36, Emanuele Giuseppe Esposito wrote:
>> Categorize the fields in struct Job to understand which ones
>> need to be protected by the job mutex and which don't.
>>
>> Signed-off-by: Emanuel
On 03/06/2022 at 18:59, Kevin Wolf wrote:
> On 14.03.2022 at 14:37, Emanuele Giuseppe Esposito wrote:
>> From: Paolo Bonzini
>>
>> We want to make sure access of job->aio_context is always done
>> under either BQL or job_mutex. The problem is
On 03/06/2022 at 18:17, Kevin Wolf wrote:
> On 14.03.2022 at 14:36, Emanuele Giuseppe Esposito wrote:
>> In preparation for the job_lock/unlock usage, create _locked
>> duplicates of some functions, since they will be sometimes called with
>> job_mutex held