21.06.2021 10:59, Emanuele Giuseppe Esposito wrote:
On 19/06/2021 17:23, Vladimir Sementsov-Ogievskiy wrote:
14.06.2021 10:33, Emanuele Giuseppe Esposito wrote:
As done in BlockCopyCallState, categorize BlockCopyTask
and BlockCopyState in IN, State and OUT fields.
This is just to understand which fields have to be protected with a lock.
.sleep_state is handled in the series "coroutine: new sleep/wake API"
and thus here left as TODO.
Signed-off-by: Emanuele Giuseppe Esposito <eespo...@redhat.com>
---
block/block-copy.c | 49 +++++++++++++++++++++++++++++-----------------
1 file changed, 31 insertions(+), 18 deletions(-)
diff --git a/block/block-copy.c b/block/block-copy.c
index 3f26be8ddc..5ff7764e87 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -52,29 +52,35 @@ typedef struct BlockCopyCallState {
/* Coroutine where async block-copy is running */
Coroutine *co;
- /* To reference all call states from BlockCopyState */
- QLIST_ENTRY(BlockCopyCallState) list;
-
/* State */
- int ret;
bool finished;
- QemuCoSleep sleep;
- bool cancelled;
+ QemuCoSleep sleep; /* TODO: protect API with a lock */
+
+ /* To reference all call states from BlockCopyState */
+ QLIST_ENTRY(BlockCopyCallState) list;
/* OUT parameters */
+ bool cancelled;
bool error_is_read;
+ int ret;
} BlockCopyCallState;
typedef struct BlockCopyTask {
AioTask task;
+ /*
+ * IN parameters. Initialized in block_copy_task_create()
+ * and never changed.
+ */
That's just not true for the method field :(
You're right: it is modified later in the while loop of
block_copy_dirty_clusters, when the task is already in the list.
I will move it to the State section.
BlockCopyState *s;
BlockCopyCallState *call_state;
int64_t offset;
- int64_t bytes;
BlockCopyMethod method;
- QLIST_ENTRY(BlockCopyTask) list;
+
+ /* State */
CoQueue wait_queue; /* coroutines blocked on this task */
+ int64_t bytes;
+ QLIST_ENTRY(BlockCopyTask) list;
} BlockCopyTask;
static int64_t task_end(BlockCopyTask *task)
@@ -90,15 +96,25 @@ typedef struct BlockCopyState {
*/
BdrvChild *source;
BdrvChild *target;
- BdrvDirtyBitmap *copy_bitmap;
+
+ /* State */
int64_t in_flight_bytes;
- int64_t cluster_size;
BlockCopyMethod method;
- int64_t max_transfer;
- uint64_t len;
QLIST_HEAD(, BlockCopyTask) tasks; /* All tasks from all block-copy calls */
QLIST_HEAD(, BlockCopyCallState) calls;
+ /* State fields that use a thread-safe API */
+ BdrvDirtyBitmap *copy_bitmap;
+ ProgressMeter *progress;
+ SharedResource *mem;
+ RateLimit rate_limit;
+ /*
+ * IN parameters. Initialized in block_copy_state_new()
+ * and never changed.
+ */
+ int64_t cluster_size;
+ int64_t max_transfer;
+ uint64_t len;
BdrvRequestFlags write_flags;
/*
@@ -114,14 +130,11 @@ typedef struct BlockCopyState {
* In this case, block_copy() will query the source’s allocation status,
* skip unallocated regions, clear them in the copy_bitmap, and invoke
* block_copy_reset_unallocated() every time it does.
+ *
+ * This field is set in backup_run() before coroutines are run,
+ * therefore is an IN.
That's not true: it is modified from backup_run() when block-copy is already
initialized, and block_copy() calls may be done from the backup-top filter.
Ok, I am not very familiar with the backup code, so I did not see it.
This means we need to protect this field too.
At this point, I think we can increase the granularity of the lock in
block_copy_dirty_clusters: instead of having, in each while loop,

block_copy_task_create();      // locks and releases internally
block_copy_block_status();     // no lock used, but reads skip_unallocated, so it will need one
if ()
    block_copy_task_shrink();  // locks and releases internally
if (s->skip_unallocated)       // will need lock
    block_copy_task_end();     // locks and releases internally
[..]
if ()
    task->method = COPY_WRITE_ZEROS; // needs lock
[..]
we could just lock the whole while section and, where needed, create _locked()
versions of task_end and similar functions that are also used in code paths
that do not hold the lock.
No, holding the lock during block_copy_block_status() is a bad idea, as the
function may yield (it calls block_status()).
I tend to agree that making access to skip_unallocated atomic is probably the
simplest way.
*/
bool skip_unallocated;
-
- ProgressMeter *progress;
-
- SharedResource *mem;
-
- RateLimit rate_limit;
} BlockCopyState;
static BlockCopyTask *find_conflicting_task(BlockCopyState *s,
--
Best regards,
Vladimir