On 08/01/2017 09:18 AM, Anton Nefedov wrote:
> Signed-off-by: Anton Nefedov <anton.nefe...@virtuozzo.com>
> ---
>  block/blkverify.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
Basically, blkverify advertises a flag only if BOTH of its underlying files
support the flag; if either side can't handle it, then we fall back to the
block layer's emulation for both sides.

With more overhead, we COULD advertise a bit as long as at least one of the
two underlying BDS supports it, and then emulate the bit ourselves on the
second BDS where it is lacking, so that at least the first BDS doesn't
suffer the penalties of the fallbacks (see the untested sketch below the
quoted hunk). But that means duplicating the block layer's fallback code in
blkverify, which is already a driver we don't expect high performance from.

For FUA, failure to advertise the bit merely means that we issue more
device-wide flush calls (instead of per-transaction mini-flushes), but the
end data should be the same. For MAY_UNMAP, though, I'm worried that we may
hit situations where a plain BDS creates holes, while the same device
paired through blkverify falls back to slower explicit zeroes. I'm
wondering whether this will bite us: do we have scenarios where the mere
act of verifying block device behavior changes the behavior we are
verifying?

Thus, while I think the code change _looks_ okay, I'm not sure whether it
is correct design-wise, nor whether it is 2.10 material.

> +    bs->supported_write_flags = BDRV_REQ_FUA &
> +        bs->file->bs->supported_write_flags &
> +        s->test_file->bs->supported_write_flags;
> +
> +    bs->supported_zero_flags =
> +        (BDRV_REQ_FUA | BDRV_REQ_MAY_UNMAP) &
> +        bs->file->bs->supported_zero_flags &
> +        s->test_file->bs->supported_zero_flags;
> +
>      ret = 0;
>  fail:
>      qemu_opts_del(opts);
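For comparison, the OR-based alternative I mentioned above would look
something like the following. This is an untested sketch, NOT part of
Anton's patch; blkverify_co_pwritev_one() is a hypothetical helper, and
the FUA branch is my approximation of the flush-after-write fallback the
generic block layer already performs when a driver lacks the bit:

    /* Advertise a bit if at least ONE child supports it... */
    bs->supported_write_flags = BDRV_REQ_FUA &
        (bs->file->bs->supported_write_flags |
         s->test_file->bs->supported_write_flags);

    /* ...then emulate it per-child on each write, duplicating the
     * generic block layer fallback inside blkverify: */
    static int coroutine_fn
    blkverify_co_pwritev_one(BdrvChild *child, int64_t offset,
                             unsigned int bytes, QEMUIOVector *qiov,
                             BdrvRequestFlags flags)
    {
        bool emulate_fua = (flags & BDRV_REQ_FUA) &&
            !(child->bs->supported_write_flags & BDRV_REQ_FUA);
        int ret;

        if (emulate_fua) {
            /* This child can't take the bit directly; strip it */
            flags &= ~BDRV_REQ_FUA;
        }
        ret = bdrv_co_pwritev(child, offset, bytes, qiov, flags);
        if (ret == 0 && emulate_fua) {
            /* Device-wide flush in place of the per-request bit */
            ret = bdrv_co_flush(child->bs);
        }
        return ret;
    }

And that's just FUA; MAY_UNMAP would need its own per-child handling in
the zero path, which is where the hole-vs-explicit-zero divergence I
worried about above comes from.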
-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org