Notes

This change addresses the same deficiencies that Richard Yao is working
to address in the following pull requests:

 - https://github.com/zfsonlinux/zfs/pull/6133
 - https://github.com/openzfs/openzfs/pull/404

Problem

The current implementation of zil_commit() can introduce significant
latency, beyond what is inherent due to the latency of the underlying
storage. The additional latency comes from two main problems:

 1. When there are outstanding ZIL blocks being written (i.e. there's
    already a "writer thread" in progress), any new calls to
    zil_commit() will block, waiting for the currently outstanding ZIL
    blocks to complete. The blocks written for each "writer thread" are
    collectively referred to as a "batch", and only a single batch can
    ever be written at a time. While a batch is being written, any new
    ZIL transactions will have to wait for the next batch to be
    written, which won't occur until the current batch finishes.

    As a result, the underlying storage may not be used as efficiently
    as possible. While "new" threads enter zil_commit() and are blocked
    waiting for the next batch, it's possible that the underlying
    storage isn't fully utilized by the current batch of ZIL blocks. In
    that case, it'd be better to allow these new threads to generate
    (and issue) a new ZIL block, such that it could be serviced by the
    underlying storage concurrently with the other ZIL blocks that are
    being serviced.

 2. Any call to zil_commit() must wait for all ZIL blocks in its "batch"
    to complete before zil_commit() returns. The size of any given
    batch is proportional to the number of ZIL transactions in the
    queue at the time the batch starts processing the queue, which
    doesn't occur until the previous batch completes. Thus, if there
    are many transactions in the queue, the batch could be composed of
    many ZIL blocks, and each call to zil_commit() will have to wait
    for all of these writes to complete (even if the thread calling
    zil_commit() only cared about one of the transactions in the
    batch). A simplified sketch of this behavior can be found at the
    end of this section.

To further complicate the situation, these two issues result in the
following side effect:

 3. If a given batch takes longer than normal to complete, the batches
    that follow grow larger, which then take longer to complete and
    further drive up the latency of zil_commit(). This can occur for a
    number of reasons, including (but not limited to): transient
    changes in the workload, and storage latency irregularities.
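
To make this batching behavior concrete, the following is a small
userland model of it. This is only an illustrative sketch using
pthreads; the function and field names (old_zil_commit(), zl_writer,
zl_next_batch, zl_com_batch) are stand-ins that loosely mirror the
existing code, not the code itself:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct old_zilog {
        pthread_mutex_t zl_lock;
        pthread_cond_t  zl_cv_batch;   /* broadcast when a batch completes */
        bool            zl_writer;     /* a batch is currently being written */
        uint64_t        zl_next_batch; /* batch the next writer will issue */
        uint64_t        zl_com_batch;  /* most recently completed batch */
    } old_zilog_t;

    /* Stand-in for writing and flushing every ZIL block in the batch. */
    static void
    write_and_flush_entire_batch(old_zilog_t *zl)
    {
        (void) zl;
    }

    static void
    old_zil_commit(old_zilog_t *zl)
    {
        pthread_mutex_lock(&zl->zl_lock);
        uint64_t mybatch = zl->zl_next_batch;

        /* Problem 1: block while another thread's batch is in flight. */
        while (zl->zl_writer) {
            pthread_cond_wait(&zl->zl_cv_batch, &zl->zl_lock);
            if (mybatch <= zl->zl_com_batch) {
                /* Our transactions rode along in that batch. */
                pthread_mutex_unlock(&zl->zl_lock);
                return;
            }
        }

        /* Become the writer; everything queued so far forms one batch. */
        zl->zl_writer = true;
        zl->zl_next_batch++;
        pthread_mutex_unlock(&zl->zl_lock);

        /*
         * Problem 2: wait for every block in the batch to be written
         * and flushed before returning, even if this caller only cares
         * about a single transaction.
         */
        write_and_flush_entire_batch(zl);

        pthread_mutex_lock(&zl->zl_lock);
        zl->zl_com_batch = mybatch;
        zl->zl_writer = false;
        pthread_cond_broadcast(&zl->zl_cv_batch);
        pthread_mutex_unlock(&zl->zl_lock);
    }

Note how a thread that arrives while zl_writer is set can do nothing
useful: it can neither help issue new blocks nor return early once its
own transaction is on stable storage.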

Solution

The solution attempted by this change has the following goals:

 1. no on-disk changes; maintain current on-disk format.
 2. modify the "batch size" to be equal to the "ZIL block size".
 3. allow new batches to be generated and issued to disk while there
    are already batches being serviced by the disk.
 4. allow zil_commit() to wait for as few ZIL blocks as possible.
 5. use as few ZIL blocks as possible, for the same number of ZIL
    transactions, without introducing significant latency to any
    individual ZIL transaction; i.e. use fewer, but larger, ZIL blocks.

In theory, with these goals met, the new algorithm will allow the
following improvements (a sketch of the intended commit flow follows
this list):

 1. new ZIL blocks can be generated and issued while there are already
    outstanding ZIL blocks being serviced by the storage.
 2. the latency of zil_commit() should be proportional to the underlying
    storage latency, rather than the incoming synchronous workload.
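
To illustrate these two points, here is a similarly simplified userland
model of the intended commit flow. Again, the names (new_zil_commit(),
commit_waiter_t, issue_commit_block()) are illustrative, not the actual
implementation: each caller attaches a "commit waiter" to the ZIL block
(lwb) that will contain its transactions, new blocks can be issued
while earlier ones are still in flight, and each caller sleeps only
until its own block is on stable storage:

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct commit_waiter {
        pthread_mutex_t cw_lock;
        pthread_cond_t  cw_cv;
        bool            cw_done;  /* set once this waiter's lwb is stable */
    } commit_waiter_t;

    /* Signalled from the (modeled) zio "done" path once the lwb is stable. */
    static void
    commit_waiter_signal(commit_waiter_t *cw)
    {
        pthread_mutex_lock(&cw->cw_lock);
        cw->cw_done = true;
        pthread_cond_signal(&cw->cw_cv);
        pthread_mutex_unlock(&cw->cw_lock);
    }

    /* Simulated device I/O for the lwb this waiter is attached to. */
    static void *
    simulated_lwb_io(void *arg)
    {
        /* ... write and flush the lwb here ... */
        commit_waiter_signal(arg);
        return (NULL);
    }

    /*
     * Issue the currently open lwb without waiting for older lwbs that
     * are still being serviced by the storage (improvement 1).
     */
    static void
    issue_commit_block(commit_waiter_t *cw)
    {
        pthread_t io;
        (void) pthread_create(&io, NULL, simulated_lwb_io, cw);
        (void) pthread_detach(io);
    }

    static void
    new_zil_commit(void)
    {
        commit_waiter_t cw;
        pthread_mutex_init(&cw.cw_lock, NULL);
        pthread_cond_init(&cw.cw_cv, NULL);
        cw.cw_done = false;

        issue_commit_block(&cw);

        /*
         * Improvement 2: wait only for the single lwb this waiter is
         * attached to, so latency tracks the storage rather than the
         * size of the incoming synchronous workload.
         */
        pthread_mutex_lock(&cw.cw_lock);
        while (!cw.cw_done)
            pthread_cond_wait(&cw.cw_cv, &cw.cw_lock);
        pthread_mutex_unlock(&cw.cw_lock);

        pthread_cond_destroy(&cw.cw_cv);
        pthread_mutex_destroy(&cw.cw_lock);
    }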

Where to Start w.r.t. Reviews

For those attempting to review this change, I suggest the following order:

 1. to familiarize yourself with the high level design this change is
    shooting for, read the comment directly above zil_commit().
 2. if anything in that comment is unclear, not true, and/or missing
    critical information, please open a review comment so that it can
    be addressed.
 3. if you're already familiar with the ZIL's data structures, now
    would be a good time to review the modifications I made to them, as
    a precursor to the algorithm changes. If you're not already
    familiar, it's probably best to do this after scanning the
    algorithm, at which point you'll hopefully have more context to
    fall back on.
 4. briefly read through zil_commit(), zil_commit_writer(), and
    zil_commit_waiter() to get familiar with the "flow" of the code
    w.r.t. the thread(s) calling zil_commit().
 5. briefly read through zil_lwb_write_done(), zil_lwb_flush_vdevs(),
    and zil_lwb_flush_vdevs_done() to get familiar with how the ZIL
    blocks complete, and how the "commit waiters" are signalled.
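
As an aid to step 5, the following is a conceptual model (again with
illustrative types, not the actual code) of what that completion path
does: once an lwb's write and the subsequent vdev flushes complete,
every commit waiter attached to that lwb is marked "done" and
signalled, which is what wakes the threads sleeping in zil_commit():

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct waiter {
        pthread_mutex_t w_lock;
        pthread_cond_t  w_cv;
        bool            w_done;
        struct waiter   *w_next;
    } waiter_t;

    typedef struct lwb_model {
        waiter_t *lwb_waiters;  /* commit waiters attached to this block */
    } lwb_model_t;

    /*
     * Modeled "flush done" callback, run after the lwb's write and the
     * subsequent vdev flushes have completed.
     */
    static void
    lwb_flush_done(lwb_model_t *lwb)
    {
        waiter_t *w = lwb->lwb_waiters;

        while (w != NULL) {
            waiter_t *next = w->w_next;

            pthread_mutex_lock(&w->w_lock);
            w->w_done = true;
            pthread_cond_broadcast(&w->w_cv);
            pthread_mutex_unlock(&w->w_lock);

            w = next;
        }

        /* Detach the waiters from the lwb before it is reused/freed. */
        lwb->lwb_waiters = NULL;
    }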

At this point, I would hope that you'll have a decent grasp of the
changes at a high level, such that you'll be able to navigate the code
on your own, and rely on the code comments for further inspection and
understanding. If that's not the case, the code probably needs more
comments, so please open review issues so I can add them.

Performance Testing

I used two fio workloads to drive a synchronous write workload. In both
cases, the performance of this change was pitted against the baseline
"master" branch.

The first workload had each fio thread trying to perform synchronous
writes as fast as it could, issuing a single write at a time. The
results from this test can be seen here: https://goo.gl/jBUiH5

For the "tl;dr", use these two links that jump directly to the section
that describe the percentage difference in fio IOPs and fio write
latency when running on both HDDs and SSDs:

 - HDD drives, IOPs: https://goo.gl/iChbUh
 - SSD drives, IOPs: https://goo.gl/97u2LA

 - HDD drives, latency: https://goo.gl/RCGxr4
 - SSD drives, latency: https://goo.gl/n7DQ1D

The second workload also had fio issuing synchronous writes, but
instead of each thread performing as many operations as it could, each
fio thread attempted to achieve about 64 write operations per second.
The results of this test can be seen here: https://goo.gl/xEPGm4

Additionally, the following links will jump directly to the percentage
difference information between the baseline and the project branch:

 - HDD drives, IOPs: https://goo.gl/rgk7m4
 - SSD drives, IOPs: https://goo.gl/jCSVnH

 - HDD drives, latency: https://goo.gl/qK9E5q
 - SSD drives, latency: https://goo.gl/dGAXk1

Upstream bugs: DLPX-52375

-- Commit Summary --

  * 0000 improve batching done in zil_commit()

-- File Changes --

    M usr/src/cmd/ztest/zloop.bash (1)
    M usr/src/cmd/ztest/ztest.c (11)
    M usr/src/uts/common/fs/zfs/dmu.c (7)
    M usr/src/uts/common/fs/zfs/sys/dmu.h (2)
    M usr/src/uts/common/fs/zfs/sys/zil.h (8)
    M usr/src/uts/common/fs/zfs/sys/zil_impl.h (72)
    M usr/src/uts/common/fs/zfs/sys/zio.h (1)
    M usr/src/uts/common/fs/zfs/txg.c (12)
    M usr/src/uts/common/fs/zfs/zfs_vnops.c (12)
    M usr/src/uts/common/fs/zfs/zil.c (1441)
    M usr/src/uts/common/fs/zfs/zio.c (33)
    M usr/src/uts/common/fs/zfs/zvol.c (15)
    M usr/src/uts/common/sys/debug.h (5)
    M usr/src/uts/common/sys/time.h (3)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/421.patch
https://github.com/openzfs/openzfs/pull/421.diff
