> > > > > > Implement asynchronous flush for virtio pmem using work queue
> > > > > > to solve the preflush ordering issue. Also, coalesce the flush
> > > > > > requests when a flush is already in process.
> > > > > >
> > > > > > Signed-off-by: Pankaj Gupta <pankaj.gu...@ionos.com>
> > > > > > ---
> > > > > >  drivers/nvdimm/nd_virtio.c   | 72 ++++++++++++++++++++++++++++--------
> > > > > >  drivers/nvdimm/virtio_pmem.c | 10 ++++-
> > > > > >  drivers/nvdimm/virtio_pmem.h | 14 +++++++
> > > > > >  3 files changed, 79 insertions(+), 17 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
> > > > > > index 10351d5b49fa..61b655b583be 100644
> > > > > > --- a/drivers/nvdimm/nd_virtio.c
> > > > > > +++ b/drivers/nvdimm/nd_virtio.c
> > > > > > @@ -97,29 +97,69 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
> > > > > >  	return err;
> > > > > >  };
> > > > > >
> > > > > > +static void submit_async_flush(struct work_struct *ws);
> > > > > > +
> > > > > >  /* The asynchronous flush callback function */
> > > > > >  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
> > > > > >  {
> > > > > > -	/*
> > > > > > -	 * Create child bio for asynchronous flush and chain with
> > > > > > -	 * parent bio. Otherwise directly call nd_region flush.
> > > > > > +	/* queue asynchronous flush and coalesce the flush requests */
> > > > > > +	struct virtio_device *vdev = nd_region->provider_data;
> > > > > > +	struct virtio_pmem *vpmem = vdev->priv;
> > > > > > +	ktime_t req_start = ktime_get_boottime();
> > > > > > +
> > > > > > +	spin_lock_irq(&vpmem->lock);
> > > > > > +	/* flush requests wait until ongoing flush completes,
> > > > > > +	 * hence coalescing all the pending requests.
> > > > > >  	 */
> > > > > > -	if (bio && bio->bi_iter.bi_sector != -1) {
> > > > > > -		struct bio *child = bio_alloc(GFP_ATOMIC, 0);
> > > > > > -
> > > > > > -		if (!child)
> > > > > > -			return -ENOMEM;
> > > > > > -		bio_copy_dev(child, bio);
> > > > > > -		child->bi_opf = REQ_PREFLUSH;
> > > > > > -		child->bi_iter.bi_sector = -1;
> > > > > > -		bio_chain(child, bio);
> > > > > > -		submit_bio(child);
> > > > > > -		return 0;
> > > > > > +	wait_event_lock_irq(vpmem->sb_wait,
> > > > > > +			    !vpmem->flush_bio ||
> > > > > > +			    ktime_before(req_start, vpmem->prev_flush_start),
> > > > > > +			    vpmem->lock);
> > > > > > +	/* new request after previous flush is completed */
> > > > > > +	if (ktime_after(req_start, vpmem->prev_flush_start)) {
> > > > > > +		WARN_ON(vpmem->flush_bio);
> > > > > > +		vpmem->flush_bio = bio;
> > > > > > +		bio = NULL;
> > > > > > +	}
> > > > >
> > > > > Why the dance with ->prev_flush_start vs just calling queue_work()
> > > > > again? queue_work() is naturally coalescing in that if the last work
> > > > > request has not started execution another queue attempt will be
> > > > > dropped.
> > > >
> > > > How will the parent flush request know when the corresponding flush is
> > > > completed?
> > >
> > > The eventual bio_endio() is what signals upper layers that the flush
> > > completed...
> > >
> > > Hold on... it's been so long that I forgot that you are copying
> > > md_flush_request() here. It would help immensely if that was mentioned
> > > in the changelog and at a minimum have a comment in the code that this
> > > was copied from md. In fact it would be extra helpful if you
> >
> > My bad. I only mentioned this in the cover letter.
>
> Yeah, sorry about that. Having come back to this after so long I just
> decided to jump straight into the patches, but even if I had read that
> cover I still would have given the feedback that md_flush_request()
> heritage should also be noted with a comment in the code.
Sure. Thanks,
Pankaj