On 1/11/24 16:06, Kevin Wolf wrote:
Am 10.01.2024 um 20:50 hat Ari Sundholm geschrieben:
During the review of a fix for a concurrency issue in blklogwrites,
it was found that the driver needs an additional fix when multiqueue,
a new feature introduced in QEMU 9.0, is enabled: the driver state may
then be read and written by multiple threads at the same time, which
was not the case when the driver was originally written.

Fix the multi-threaded scenario by introducing a mutex to protect the
mutable fields in the driver state, and always having the mutex locked
by the current thread when accessing them.

Additionally, add the const qualifier to a few BDRVBlkLogWritesState
pointer targets in contexts where the driver state is not written to.

Signed-off-by: Ari Sundholm <a...@tuxera.com>
---
  block/blklogwrites.c | 29 +++++++++++++++++++++++------
  1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/block/blklogwrites.c b/block/blklogwrites.c
index ba717dab4d..50f68df2a6 100644
--- a/block/blklogwrites.c
+++ b/block/blklogwrites.c
@@ -3,7 +3,7 @@
   *
   * Copyright (c) 2017 Tuomas Tynkkynen <tuo...@tuxera.com>
   * Copyright (c) 2018 Aapo Vienamo <a...@tuxera.com>
- * Copyright (c) 2018 Ari Sundholm <a...@tuxera.com>
+ * Copyright (c) 2018-2024 Ari Sundholm <a...@tuxera.com>
   *
   * This work is licensed under the terms of the GNU GPL, version 2 or later.
   * See the COPYING file in the top-level directory.
@@ -55,9 +55,18 @@ typedef struct {
      BdrvChild *log_file;
      uint32_t sectorsize;
      uint32_t sectorbits;
+    uint64_t update_interval;
+
+    /*
+     * The mutable state of the driver, consisting of the current log sector
+     * and the number of log entries.
+     *
+     * May be read and/or written from multiple threads, and the mutex must be
+     * held when accessing these fields.
+     */
      uint64_t cur_log_sector;
      uint64_t nr_entries;
-    uint64_t update_interval;
+    QemuMutex mutex;
  } BDRVBlkLogWritesState;
static QemuOptsList runtime_opts = {
@@ -149,6 +158,7 @@ static int blk_log_writes_open(BlockDriverState *bs, QDict *options, int flags,
      uint64_t log_sector_size;
      bool log_append;
+    qemu_mutex_init(&s->mutex);
      opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
      if (!qemu_opts_absorb_qdict(opts, options, errp)) {
          ret = -EINVAL;
@@ -255,6 +265,7 @@ fail_log:
          bdrv_unref_child(bs, s->log_file);
          bdrv_graph_wrunlock();
          s->log_file = NULL;
+        qemu_mutex_destroy(&s->mutex);
      }
  fail:
      qemu_opts_del(opts);
@@ -269,6 +280,7 @@ static void blk_log_writes_close(BlockDriverState *bs)
      bdrv_unref_child(bs, s->log_file);
      s->log_file = NULL;
      bdrv_graph_wrunlock();
+    qemu_mutex_destroy(&s->mutex);
  }
static int64_t coroutine_fn GRAPH_RDLOCK
@@ -295,7 +307,7 @@ static void blk_log_writes_child_perm(BlockDriverState *bs, BdrvChild *c,
static void blk_log_writes_refresh_limits(BlockDriverState *bs, Error **errp)
  {
-    BDRVBlkLogWritesState *s = bs->opaque;
+    const BDRVBlkLogWritesState *s = bs->opaque;
      bs->bl.request_alignment = s->sectorsize;
  }
@@ -338,15 +350,18 @@ blk_log_writes_co_do_log(BlkLogWritesLogReq *lr)
       * driver may be modified by other driver operations while waiting for the
       * I/O to complete.
       */
+    qemu_mutex_lock(&s->mutex);
      const uint64_t entry_start_sector = s->cur_log_sector;
      const uint64_t entry_offset = entry_start_sector << s->sectorbits;
      const uint64_t qiov_aligned_size = ROUND_UP(lr->qiov->size, s->sectorsize);
      const uint64_t entry_aligned_size = qiov_aligned_size +
          ROUND_UP(lr->zero_size, s->sectorsize);
      const uint64_t entry_nr_sectors = entry_aligned_size >> s->sectorbits;
+    const uint64_t entry_seq = s->nr_entries + 1;
-    s->nr_entries++;
+    s->nr_entries = entry_seq;
      s->cur_log_sector += entry_nr_sectors;
+    qemu_mutex_unlock(&s->mutex);
/*
       * Write the log entry. Note that if this is a "write zeroes" operation,
@@ -366,14 +381,16 @@ blk_log_writes_co_do_log(BlkLogWritesLogReq *lr)
/* Update super block on flush or every update interval */
      if (lr->log_ret == 0 && ((lr->entry.flags & LOG_FLUSH_FLAG)
-        || (s->nr_entries % s->update_interval == 0)))
+        || (entry_seq % s->update_interval == 0)))
      {
+        qemu_mutex_lock(&s->mutex);
          struct log_write_super super = {
              .magic      = cpu_to_le64(WRITE_LOG_MAGIC),
              .version    = cpu_to_le64(WRITE_LOG_VERSION),
              .nr_entries = cpu_to_le64(s->nr_entries),
              .sectorsize = cpu_to_le32(s->sectorsize),
          };
+        qemu_mutex_unlock(&s->mutex);

This hunk looks odd. Is the only thing the lock does here that
s->nr_entries is accessed atomically?


Yes. We want to make sure that the latest value is being used.

Looking a bit closer, if s->nr_entries has been changed (increased)
meanwhile by another request, I assume that we indeed want the newer
value to be stored in the superblock rather than using the value that
was current when we did the calculations. So superficially, this part
looks good.


Yes, that is the intention.

But the other thing I notice is that a few lines down you can get
concurrent write requests to the superblock, and there is no guaranteed
order, so an older update could overwrite a newer one. Don't we need to
serialise writes to the superblock? (I would actually expect this to be
a problem already without multithreading.)


That's a good point. Maybe I should add a flag indicating that the
superblock is being updated. But that may be too simplistic: an older
superblock update should neither clobber a newer one nor prevent a newer
one from being performed, and we do not want to hold the mutex while
doing I/O. I'll think this through and post a v2.

Thanks,
Ari

Kevin


