On 12.02.2021 at 10:14, Max Reitz wrote:
> On 11.02.21 21:38, Vladimir Sementsov-Ogievskiy wrote:
> > 11.02.2021 20:22, Max Reitz wrote:
> > > We have repeatedly received reports that SEEK_HOLE and SEEK_DATA are
> > > slow on certain filesystems and/or under certain circumstances.  That is
> > > why we generally try to avoid it (which is why bdrv_co_block_status()
> > > has the @want_zero parameter, and which is why qcow2 has a metadata
> > > preallocation detection, so we do not fall through to the protocol layer
> > > to discover which blocks are zero, unless that is really necessary
> > > (i.e., for metadata-preallocated images)).
> > > 
> > > In addition to those measures, we can also try to speed up zero
> > > detection by letting file-posix cache some hole location information,
> > > namely where the next hole after the most recently queried offset is.
> > > This helps especially for images that are (nearly) fully allocated,
> > > which is coincidentally also the case where querying for zero
> > > information cannot gain us much.
> > > 
> > > Note that this of course only works so long as we have no concurrent
> > > writers to the image, which is the case when the WRITE capability is not
> > > shared.
> > > 
> > > Alternatively (or perhaps as an improvement in the future), we could let
> > > file-posix keep track of what it knows is zero and what it knows is
> > > non-zero with bitmaps, which would help images that actually have a
> > > significant number of holes (where this implementation here cannot do
> > > much).  But for such images, SEEK_HOLE/DATA are generally faster (they
> > > do not need to seek through the whole file), and the performance lost by
> > > querying the block status does not feel as bad because it is outweighed
> > > by the performance that can be saved by special-casing zeroed areas, so
> > > focussing on images that are (nearly) fully allocated is more important.
> > > 
> > > Signed-off-by: Max Reitz <mre...@redhat.com>
> > 
> > I'll look at it tomorrow... Just wanted to note that something similar
> > was proposed by Kevin some time ago:
> > 
> > <20190124141731.21509-1-kw...@redhat.com>
> > https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg06271.html
> 
> Interesting.  The reasoning that it doesn’t matter whether anyone
> writes to the assumed-data regions makes sense.
> 
> I can’t see a real reason why it was kind of forgotten, apparently...

After qcow2 stopped recursively querying the file-posix layer, the
relevant case under discussion was fixed anyway, so it didn't have the
highest priority any more...

I think the open question (and possibly work) in the old thread was
whether this should be moved out of file-posix into the generic block
layer.
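
Either way, the state we would be caching is tiny. Just so we are
talking about the same thing, here is a rough sketch of what I have in
mind (names are made up and this is not the code from your patch),
whether it ends up in BDRVRawState or somewhere generic:

#include <stdbool.h>
#include <stdint.h>

/* One contiguous range that we know contains data.  Sketch only. */
typedef struct SeekDataCache {
    bool     valid;       /* false while WRITE is shared */
    uint64_t data_start;  /* start of the known-data range */
    uint64_t data_end;    /* end of it, i.e. where the next hole begins */
} SeekDataCache;

/*
 * Forget what we know.  Only needed for operations that can make the
 * file sparse again (discard, write_zeroes), or when WRITE becomes
 * shared; plain writes to a data range keep it data.
 */
static inline void seek_data_cache_invalidate(SeekDataCache *c)
{
    c->valid = false;
}

/* Does the cache already tell us that [offset, offset + bytes) is data? */
static inline bool seek_data_cache_covers(const SeekDataCache *c,
                                          uint64_t offset, uint64_t bytes)
{
    return c->valid &&
           offset >= c->data_start &&
           offset + bytes <= c->data_end;
}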

With your patch, I guess the other open question is whether we want to
try to cache holes as well. I assume that in the common case you have
many consecutive data extents, but rarely many holes (I guess you can
have more than one if some areas are unallocated and others are
allocated, but unwritten?). If that's the usual pattern, it's probably
not worth caching holes.
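
To illustrate why data extents are the interesting case: with a single
cached data range, a hit means we can answer a block-status query
without any lseek() at all, and a miss costs us the one SEEK_HOLE call
we would have made anyway, which then refreshes the cache (again just a
sketch building on the struct above, made-up names, error handling kept
minimal):

#define _GNU_SOURCE   /* for SEEK_HOLE on glibc */
#include <errno.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Returns 1 if [offset, offset + bytes) is known (or found) to be data,
 * 0 if offset sits in a hole, -errno on failure.  Sketch only.
 */
static int block_status_with_cache(int fd, SeekDataCache *c,
                                   uint64_t offset, uint64_t bytes)
{
    off_t hole;

    if (seek_data_cache_covers(c, offset, bytes)) {
        return 1;  /* cache hit, no lseek() needed */
    }

    hole = lseek(fd, offset, SEEK_HOLE);
    if (hole < 0) {
        return -errno;
    }

    if ((uint64_t)hole == offset) {
        return 0;  /* offset itself is (at the start of) a hole */
    }

    /* [offset, hole) is data; remember it for the next query */
    c->valid = true;
    c->data_start = offset;
    c->data_end = hole;

    return 1;
}

A hole cache would look symmetrical, but it could only ever answer for
the one hole we saw last, so on a sparse image it would rarely hit, and
on a miss we are back to SEEK_HOLE anyway.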

Kevin

