On Wed, 08/14 13:28, Kaveh Razavi wrote:
> On 08/14/2013 12:53 AM, Alex Bligh wrote:
> > What is this cache keyed on and how is it invalidated? Let's say
> > 2 VMs on node X boot with backing file A. The first populates the cache,
> > and the second utilises the cache. I then stop both VMs, delete
> > the derived disks, and change the contents of the backing file. I then
> > boot a VM using the changed backing file on node X and node Y. I think
> > node Y is going to get the clean backing file. However, how does node
> > X know not to use the cache? Would it not be a good idea to check
> > (at least) that the inode number and the mtime of the backing file
> > correspond with the values saved in the cache, and if they are not the
> > same then ignore the cache?
>
> You could argue the same for normal qcow2. Start from a cow image with a
> backing image, stop the VM. Start another VM, modifying the backing
> image directly. Start the VM again, this time from the cow image, and
> the VM can see stale data in the stored data clusters of the cow image.
>
> The idea is that once a user registers an image to a cloud middleware, it
> is assigned an image ID. As long as the middleware assigns a cache to the
> backing image with the same ID, there is no possibility of reading stale
> data. If it is desired to have some sort of check at the qemu level, it
> should be implemented in qcow2 directly for all backing files, and this
> extension will benefit from it too.

Yes, this one sounds good to have. VMDK and VHDX have this kind of backing
file status validation.
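For illustration, the inode/mtime check Alex suggested could look roughly
like the untested sketch below. It uses plain POSIX stat() and hypothetical
struct/function names; it is not actual qcow2 or block-layer code, just the
shape of the validation:

/*
 * Untested sketch of the backing file identity check. All names here
 * (BackingFileId, backing_id_snapshot, backing_cache_valid) are
 * hypothetical; a real patch would hook into the qcow2/block layer.
 */
#include <stdbool.h>
#include <sys/stat.h>

/* Identity of the backing file recorded when the cache was populated. */
typedef struct BackingFileId {
    dev_t dev;      /* device the backing file lives on */
    ino_t ino;      /* inode number */
    time_t mtime;   /* last-modification time */
} BackingFileId;

/* Record the current identity of the backing file into *id. */
static int backing_id_snapshot(const char *path, BackingFileId *id)
{
    struct stat st;

    if (stat(path, &st) < 0) {
        return -1;
    }
    id->dev = st.st_dev;
    id->ino = st.st_ino;
    id->mtime = st.st_mtime;
    return 0;
}

/*
 * Return true if the backing file still matches the identity saved in
 * the cache. On any mismatch, or if stat() fails, the cache must be
 * discarded rather than trusted.
 */
static bool backing_cache_valid(const char *path, const BackingFileId *saved)
{
    BackingFileId now;

    if (backing_id_snapshot(path, &now) < 0) {
        return false;
    }
    return now.dev == saved->dev &&
           now.ino == saved->ino &&
           now.mtime == saved->mtime;
}

The snapshot would be taken when the cache is populated and the check run
whenever the cache is opened; note that mtime alone can miss sub-second
rewrites, which is why the inode number is compared as well.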
Thanks.

Fam