Kaveh,

On 14 Aug 2013, at 12:28, Kaveh Razavi wrote:
> On 08/14/2013 12:53 AM, Alex Bligh wrote:
>> What is this cache keyed on and how is it invalidated? Let's say
>> 2 VMs on node X boot with backing file A. The first populates the cache,
>> and the second utilises the cache. I then stop both VMs, delete
>> the derived disks, and change the contents of the backing file. I then
>> boot a VM using the changed backing file on node X and node Y. I think
>> node Y is going to get the clean backing file. However, how does node
>> X know not to use the cache? Would it not be a good idea to check
>> (at least) that the inode number and the mtime of the backing file
>> correspond with values saved in the cache, and if they do not, ignore
>> the cache?
>
> You could argue the same for normal qcow2. Start from a cow image with a
> backing image, stop the VM. Start another VM, modifying the backing
> image directly. Start the VM again, this time from the cow image, and
> the VM can see stale data in the stored data clusters of the cow image.

That's if the VM retains a qcow2 based on the backing file. I meant
rebooting the two VMs with a fresh qcow2 read/write file.

> The idea is once a user registers an image to a cloud middleware, it is
> assigned an image ID. As long as the middleware assigns a cache to the
> backing image with the same ID, there is no possibility to read stale
> data. If it is desired to have some sort of check at the qemu level, it
> should be implemented in the qcow2 directly for all backing files and
> this extension will benefit from it too.

I don't agree. The penalty for a qcow2 suffering a false positive on a
change to a backing file is that the VM can no longer boot. The penalty
for your cache suffering a false positive is that the VM boots marginally
slower.

Moreover, it is expected behaviour that you CAN change a backing file if
there are no r/w images based on it. Your cache changes that assumption.

-- 
Alex Bligh
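
For illustration, a minimal sketch of the inode/mtime check suggested
above. This is not part of the proposed patch or of existing qcow2 code;
the `struct cache_meta` fields and the `cache_is_valid()` helper are
assumed names, standing in for whatever metadata the cache would record
when it is populated:

#include <stdbool.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Hypothetical metadata stored alongside the cache when it is filled. */
struct cache_meta {
    ino_t  cached_ino;    /* inode of the backing file at cache-fill time */
    time_t cached_mtime;  /* mtime of the backing file at cache-fill time */
};

/*
 * Return true only if the backing file still matches the values saved in
 * the cache. On any mismatch (or stat failure) the cache is ignored and
 * repopulated; the cost of a false negative is just a slower boot.
 */
static bool cache_is_valid(const char *backing_file,
                           const struct cache_meta *meta)
{
    struct stat st;

    if (stat(backing_file, &st) != 0) {
        return false;
    }
    return st.st_ino == meta->cached_ino &&
           st.st_mtime == meta->cached_mtime;
}

Since mtime has only one-second granularity on many filesystems, also
comparing the file size (or the nanosecond part of st_mtim where
available) would arguably make the check more robust.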