Hi all,

I have a question about a race condition during the write operation on a snapshot.

I think the problem exists in the VMDK and QCOW* formats (I didn't check the
others).

Here is the example from block_vmdk.c:

static int vmdk_write(BlockDriverState *bs, int64_t sector_num,
                      const uint8_t *buf, int nb_sectors)
{
    BDRVVmdkState *s = bs->opaque;
    int ret, index_in_cluster, n;
    uint64_t cluster_offset;

    while (nb_sectors > 0) {
        index_in_cluster = sector_num & (s->cluster_sectors - 1);
        n = s->cluster_sectors - index_in_cluster;
        if (n > nb_sectors)
            n = nb_sectors;
        /* allocates the cluster if needed and updates the L2 table on disk */
        cluster_offset = get_cluster_offset(bs, sector_num << 9, 1);
        if (!cluster_offset)
            return -1;
        /* only after the metadata update is the grain data itself written */
        lseek(s->fd, cluster_offset + index_in_cluster * 512, SEEK_SET);
        ret = write(s->fd, buf, n * 512);
        if (ret != n * 512)
            return -1;
        nb_sectors -= n;
        sector_num += n;
        buf += n * 512;
    }
    return 0;
}

 

The get_cluster_offset(…) routine updates the L2 table in the image metadata
and returns the cluster_offset.
Only after that does the vmdk_write(…) routine actually write the grain in its
proper place.
So we have a timing hole here: between the metadata update and the data write,
the image is inconsistent.

 

Assume the VM performing the write operation is destroyed at exactly this
moment.
We are then left with a corrupted image (with an updated L2 table, but without
the grain itself).
 

            Regards,

                        Igor Lvovsky