I would really love to hear opinions on this, since we have already had some discussions on other similar patches.
Thank you,
Emanuele

On 01/03/2022 15:21, Emanuele Giuseppe Esposito wrote:
> This series tries to provide a proof of concept and a clear explanation
> of why we need to use drains (and more precisely subtree drains)
> to replace the AioContext lock, especially to protect the
> BlockDriverState ->children and ->parents lists.
>
> Just a small recap of the key concepts:
> * We split the block layer APIs into "global state" (GS), "I/O", and
>   "global state or I/O".
>   GS APIs run in the main loop, under the BQL, and are the only
>   ones allowed to modify the BlockDriverState graph.
>
>   I/O APIs are thread safe and can run in any thread.
>
>   "Global state or I/O" APIs are essentially all APIs that use
>   BDRV_POLL_WHILE. This is because only two threads can use
>   BDRV_POLL_WHILE: the main loop and the iothread that runs
>   the AioContext.
>
> * Drains allow the caller (either the main loop or the iothread
>   running the context) to wait for all in-flight requests and
>   operations of a BDS: normal drains target a given node and its
>   parents, while subtree drains also include the subgraph of the
>   node. Siblings are not affected by either kind of drain.
>   After bdrv_drained_begin, no more requests are allowed to come
>   from the affected nodes. Therefore the only actor left working
>   on a drained part of the graph should be the main loop.
>
> What do we intend to do
> -----------------------
> We want to remove the AioContext lock. It is not 100% clear how
> many things we are protecting with it, and why.
> As a starting point, we want to protect the BlockDriverState
> ->parents and ->children lists, since they are read by the main
> loop and I/O, but only written by the main loop under the BQL.
> The function that modifies these lists is
> bdrv_replace_child_common().
>
> How do we want to do it
> -----------------------
> We identified the subtree_drain API as the ideal substitute for
> the AioContext lock. The reason is simple: draining prevents the
> iothread from reading or writing the nodes, so once the main loop
> finishes executing bdrv_drained_begin() on the relevant part of
> the graph, we are sure that the iothread is not going to look at,
> or interfere with, that part of the graph.
> We are also sure that the only two actors that can look at a
> specific BlockDriverState in any given context are the main loop
> and the iothread running the AioContext (ensured by the "global
> state or I/O" logic).
>
> Why use _subtree_ instead of a normal drain
> -------------------------------------------
> A simple drain "blocks" a given node and all its parents, but it
> doesn't touch the children.
> This means that with a simple drain, a child can keep processing
> requests, and eventually end up calling bdrv_drained_begin itself,
> reading the parent list while the main loop is modifying it.
> Therefore a subtree drain is necessary.
>
> Possible scenarios
> ------------------
> Keeping in mind that only the main loop and the iothread running
> the AioContext can drain a given node, we could have:
>
> main loop successfully drains and then the iothread tries to drain:
>   impossible scenario, as the iothread is already stopped once the
>   main loop successfully drains.
>
> iothread successfully drains and then the main loop drains:
>   should not be a problem, as:
>   1) the iothread should already be "blocked" by its own drain
>   2) the main loop would still wait for it to completely block
>   There is the issue of mirror overriding this scenario to avoid
>   deadlocks, but that is handled in the next section.
>
> main loop and iothread try to drain together:
>   as above, this case doesn't really matter. As long as the
>   bdrv_drained_begin invariant is respected, the main loop will
>   continue only once the iothread is "blocked" on that part of
>   the graph.
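To make the "How do we want to do it" section above concrete, here is a
minimal sketch (mine, illustrative only, not code from the series) of how
a graph-modifying global-state function could be protected once the
AioContext lock is gone. The bdrv_subtree_drained_{begin,end}_unlocked()
helpers are the ones introduced in patch 3; modify_graph_example() is a
hypothetical call site:

#include "qemu/osdep.h"
#include "block/block_int.h"

/* Hypothetical GS function: runs in the main loop, under the BQL. */
static void modify_graph_example(BlockDriverState *bs)
{
    /*
     * Quiesce bs, its parents and its whole subtree: once this
     * returns, no new request can come from the drained nodes, so
     * the iothread is no longer walking ->parents/->children.
     */
    bdrv_subtree_drained_begin_unlocked(bs);

    /*
     * The main loop is now the only actor on this part of the
     * graph, so ->parents/->children can be rewritten safely,
     * e.g. via bdrv_replace_child_common().
     */

    bdrv_subtree_drained_end_unlocked(bs);
}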
> A note on iothread draining
> ---------------------------
> Theoretically, draining from an iothread should not be possible,
> as the iothread would be scheduling a BH in the main loop and then
> waiting for itself to stop, even though it is not stopped yet
> because it is waiting for the BH.
>
> This is what would happen in the tests in patch 5 if .drained_poll
> were not implemented.
>
> Therefore, one solution is to use the .drained_poll callback in
> BlockJobDriver. This callback overrides the default job poll()
> behavior and allows the polling condition to stop waiting for the
> job. It is currently used only in mirror.
> However, this breaks the bdrv_drained_begin invariant, because the
> iothread is not really blocked on that node but continues running.
> To fix this, patch 4 allows the polling condition to be used only
> by the iothread, and not by the main loop too, preventing the
> drain from returning before the iothread is effectively stopped.
> This is also shown in the tests in patch 5: if the fix in patch 4
> is removed, the main loop drain returns earlier and allows the
> iothread to keep running and drain together with it.
>
> The other patches in this series are cherry-picked from the
> various series I already sent, and are included here just to allow
> the subtree_drained_begin/end_unlocked implementation.
>
> Emanuele Giuseppe Esposito (5):
>   aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED
>   introduce BDRV_POLL_WHILE_UNLOCKED
>   block/io.c: introduce bdrv_subtree_drained_{begin/end}_unlocked
>   child_job_drained_poll: override polling condition only when in home
>     thread
>   test-bdrv-drain: ensure draining from main loop stops iothreads
>
>  block/io.c                   |  48 ++++++--
>  blockjob.c                   |   3 +-
>  include/block/aio-wait.h     |  15 ++-
>  include/block/block.h        |   7 ++
>  tests/unit/test-bdrv-drain.c | 218 +++++++++++++++++++++++++++++++++++
>  5 files changed, 274 insertions(+), 17 deletions(-)
>
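To illustrate the patch 4 idea in code, the shape of the change to
child_job_drained_poll() in blockjob.c is roughly the following. This is
a simplified paraphrase, not the exact patch: the existing checks are
kept, and only the condition guarding drv->drained_poll changes so that
the job's override applies only in its home iothread:

static bool child_job_drained_poll(BdrvChild *c)
{
    BlockJob *bjob = c->opaque;
    Job *job = &bjob->job;
    const BlockJobDriver *drv = block_job_driver(bjob);

    /* An inactive or completed job doesn't have any pending requests. */
    if (!job->busy || job_is_completed(job)) {
        return false;
    }

    /*
     * Let the job's own polling condition (e.g. mirror's
     * .drained_poll) cut the wait short only when draining from the
     * job's home iothread. When the main loop drains, keep polling
     * until the iothread is really stopped, preserving the
     * bdrv_drained_begin invariant.
     */
    if (drv->drained_poll &&
        qemu_get_current_aio_context() == job->aio_context) {
        return drv->drained_poll(bjob);
    }

    return true;
}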