On 29.10.2010 16:15, Anthony Liguori wrote:
> On 10/29/2010 09:01 AM, Markus Armbruster wrote:
>> Ryan Harper <ry...@us.ibm.com> writes:
>>> diff --git a/block.c b/block.c
>>> index a19374d..be47655 100644
>>> --- a/block.c
>>> +++ b/block.c
>>> @@ -1328,6 +1328,13 @@ void bdrv_set_removable(BlockDriverState *bs, int removable)
>>>      }
>>>  }
>>>
>>> +void bdrv_unplug(BlockDriverState *bs)
>>> +{
>>> +    qemu_aio_flush();
>>> +    bdrv_flush(bs);
>>> +    bdrv_close(bs);
>>> +}
>>>
>> Stupid question: why doesn't bdrv_close() flush automatically?
>>
>
> I don't think it's a bad idea to do that but to the extent that the
> block API is designed after POSIX file I/O, close does not usually imply
> flush.
I don't think it really resembles POSIX. More or less the only thing
they have in common is that both provide open, read, write and close,
which is something that probably any API for file access provides. The
operation you're talking about here is bdrv_flush/fsync, which isn't
implied by a POSIX close either?

>> And why do we have to flush here, but not before other uses of
>> bdrv_close(), such as eject_device()?
>>
>
> Good question. Kevin should also confirm, but looking at the code, I
> think flush() is needed before close. If there's a pending I/O event
> and you close before the I/O event is completed, you'll get a callback
> for completion against a bogus BlockDriverState.
>
> I can't find anything in either raw-posix or the generic block layer
> that would mitigate this.

I'm not aware of anything either. This is what qemu_aio_flush would do.

It seems reasonable to me to call both qemu_aio_flush and bdrv_flush in
bdrv_close. We probably don't really need bdrv_flush for correctness,
but it can't hurt, and bdrv_close shouldn't happen that often anyway.

Kevin
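
For reference, a rough sketch of how bdrv_close() could pick up both
flushes itself, as suggested above. This only shows the assumed shape
of the function, not the actual block.c code or a tested patch:

void bdrv_close(BlockDriverState *bs)
{
    if (bs->drv) {
        /* Drain all in-flight AIO first so that no completion callback
         * can fire against an already-closed BlockDriverState. */
        qemu_aio_flush();

        /* Not strictly needed for correctness, but cheap: push any
         * buffered data out before closing. */
        bdrv_flush(bs);

        bs->drv->bdrv_close(bs);
        bs->drv = NULL;

        /* ... remaining cleanup as in the existing bdrv_close() ... */
    }
}

With that in place, callers such as bdrv_unplug() or eject_device()
would no longer need to remember the flush themselves.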