[Qemu-devel] Merging backing file with new image
Hi all, I am using the qcow2 image format. I created a backing file and performed some I/O on the new image: *qemu-img create -f qcow2 -b snap1 guestqcow2*. Now I want to merge snap1 with guestqcow2. Is there any command which can merge both disks into one single file? -- *Pankaj Rawat*
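For what it's worth, qemu-img has two standard ways to do this kind of merge (both are real qemu-img subcommands; merged.qcow2 is just an example name):

```shell
# Option 1: fold the changes in guestqcow2 back down into its backing
# file. After this, snap1 contains the merged state of the disk.
qemu-img commit guestqcow2

# Option 2: flatten the whole chain into a new standalone image
# (snap1 plus guestqcow2 merged, with no backing file).
qemu-img convert -O qcow2 guestqcow2 merged.qcow2
```

Option 1 modifies snap1 in place, so any other image that uses snap1 as its backing file would also see the changes; option 2 leaves both originals untouched.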
[Qemu-devel] I/O in backing file
Hi all, I am currently using a backing file. My question concerns I/O operations. When we create an external snapshot in qcow2, a new file is created, leaving the original file as the backing file. *Can anyone explain in detail how the I/O is performed* once the new snapshot image, which refers to the original file as its backing file, is booted? *How is a read operation performed? How is it decided whether the data is present in the snapshot image or in the backing file? When a write operation is performed, how is it decided whether the data is already present in the backing file or a new cluster has to be allocated?*
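In outline, qcow2 answers these questions with its L1/L2 cluster mapping tables: a read falls through to the backing file whenever the cluster is unallocated in the top image, and a write to such a cluster allocates it in the top image (copy-on-write), leaving the backing file untouched. A minimal sketch of that lookup logic, using plain dictionaries to stand in for the L1/L2 tables (`Image`, `read`, and `write` are illustrative names, not QEMU APIs):

```python
class Image:
    """Toy model of a qcow2 image: a sparse cluster map plus an optional backing file."""
    def __init__(self, backing=None):
        self.clusters = {}        # cluster index -> data (stands in for the L1/L2 tables)
        self.backing = backing    # another Image, or None

    def read(self, idx):
        # Read: if the cluster is allocated here, serve it locally; otherwise
        # fall through the backing chain; unallocated everywhere reads as zeros.
        if idx in self.clusters:
            return self.clusters[idx]
        if self.backing is not None:
            return self.backing.read(idx)
        return b"\x00"

    def write(self, idx, data):
        # Write: in this toy, each request covers a whole cluster, so it is
        # simply installed locally. A partial write would first copy the old
        # cluster contents up from the backing file before modifying them.
        self.clusters[idx] = data

base = Image()
base.clusters[0] = b"A"           # data that lives in the backing file
snap = Image(backing=base)

print(snap.read(0))               # served from the backing file: b'A'
snap.write(0, b"B")               # allocated in the snapshot image
print(snap.read(0))               # now served locally: b'B'
print(base.read(0))               # backing file is untouched: b'A'
```

The key point is that the backing file is only ever read, never written: all new allocations land in the top image.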
[Qemu-devel] Internal Snapshot disappear after write operation
Hi all, I am using *qcow2 internal snapshots*, but they do not seem to be working correctly. To explain my scenario: I created qcow2 internal snapshots *(while the qcow2 VM is on)*. Then I list the snapshots:

*-bash-4.1# qemu-img snapshot -l guestqcow2*
Snapshot list:
ID   TAG   VM SIZE   DATE                  VM CLOCK
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000
1    on3   0         2012-03-22 13:45:26   00:00:00.000
2    on4   0         2012-03-22 13:51:28   00:00:00.000

Up to this point it is fine. But when I write something on the VM using the dd command, then:

*-bash-4.1# qemu-img snapshot -l guestqcow2*
Snapshot list:
ID   TAG   VM SIZE   DATE                  VM CLOCK
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000
0                    1970-01-01 09:00:00   00:00:00.000

all my snapshots are gone. This is not correct behavior. Can anyone explain why it is happening, and how I can fix it? -- *Pankaj Rawat*
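For reference, these are the internal-snapshot subcommands being exercised here (all standard qemu-img options; the snapshot tag is just an example):

```shell
qemu-img snapshot -c before_write guestqcow2   # create an internal snapshot
qemu-img snapshot -l guestqcow2                # list snapshots
qemu-img snapshot -a before_write guestqcow2   # apply (revert to) a snapshot
qemu-img snapshot -d before_write guestqcow2   # delete a snapshot
qemu-img check guestqcow2                      # check image consistency
```

One thing worth checking in this scenario: qemu-img must not modify an image while a running VM has it open. For a live guest, internal snapshots are taken with the QEMU monitor's savevm command instead, and mixing the two paths can corrupt the snapshot metadata, which would be consistent with the garbage entries shown above.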
Re: [Qemu-devel] tuning parameters qed
Thanks for the reply. We can change the cluster_size with the *qemu-img* command, but how can we change the *L1/L2 table size*?

On Mon, Mar 12, 2012 at 3:17 PM, Stefan Hajnoczi stefa...@gmail.com wrote:

On Mon, Mar 12, 2012 at 9:12 AM, PANKAJ RAWAT pankajr...@gmail.com wrote: Hi all, I have a KVM machine and I use the qed format as the disk image format. I want to improve the I/O performance of my machine. Can anyone tell me, other than cluster_size and cache, what the tuning parameters for qed are, so that I can improve the performance of my VM?

Cluster and L1/L2 table size are the only QED-specific parameters. What to tune really depends on what bottleneck you are seeing. You need to do disk I/O benchmarks to find that out. If you are generally looking for good settings, I suggest leaving the QED parameters at their defaults and running with -drive if=virtio,cache=none,aio=native,file=path/to/image.qed with a recent QEMU and host kernel.

Stefan -- *Pankaj Rawat*
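For the QED format, both knobs are exposed as creation options of qemu-img: cluster_size and table_size, where table_size is the number of clusters occupied by each L1/L2 table. The specific values below are only examples (table_size must be a power of two):

```shell
# Create a qed image with an explicit cluster size and L1/L2 table size.
# table_size=4 is the default; larger tables mean each L1/L2 table maps
# more data at the cost of bigger metadata reads and writes.
qemu-img create -f qed -o cluster_size=65536,table_size=4 test.qed 8G

# Inspect the result
qemu-img info test.qed
```

Note that these options can only be set at image creation time, so changing them on an existing image means creating a new image (e.g. via qemu-img convert) with the desired options.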
[Qemu-devel] tuning parameters qed
Hi all, I have a KVM machine and I use the qed format as the disk image format. I want to improve the I/O performance of my machine. Can anyone tell me, other than cluster_size and cache, what the tuning parameters for qed are, so that I can improve the performance of my VM? -- *Pankaj Rawat*
Re: [Qemu-devel] regarding qcow2metadata
Yes, of course. Here is the output:

*[root@t06 p]# ls -lsh*
*total 1.4M*
*1.4M -rw-r--r-- 1 root root 8.1G Mar 12 09:10 guest*

On Wed, Mar 7, 2012 at 10:00 PM, Mulyadi Santosa mulyadi.sant...@gmail.com wrote: have you double-checked by using the ls -lsh command? :)

-- *Pankaj Rawat*
[Qemu-devel] Live migration qed
Hi all, can anyone tell me how I can migrate a VM (qed disk format)? What is the process by which we can migrate the image from one KVM host to another? -- *Pankaj Rawat*
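Live migration in QEMU is format-independent, so it works the same for qed as for any other image format: with shared storage only the VM state moves, and without shared storage block migration copies the disk as well. A minimal sketch using the QEMU monitor (hostnames, paths, and the port number are placeholders):

```shell
# On the destination host: start QEMU with the same image path,
# listening for an incoming migration on TCP port 4444.
qemu-system-x86_64 -drive file=/shared/guest.qed,if=virtio \
    -incoming tcp:0:4444

# On the source host, in the QEMU monitor of the running VM:
#   (qemu) migrate -d tcp:dest-host:4444
# If the image is NOT on shared storage, add -b for block migration,
# which copies the disk contents along with the VM state:
#   (qemu) migrate -d -b tcp:dest-host:4444
# Monitor progress with:
#   (qemu) info migrate
```

With shared storage the qed file itself never moves; both hosts must simply see it at the same path.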
[Qemu-devel] regarding qcow2metadata
Hi all, *qemu-img create -f qcow2 -o preallocation=metadata guest 8G* The above command allocates the full 8 GB to the disk image, i.e.:

*[root@t06 p]# ls -lh*
*total 1.4M*
*-rw-r--r-- 1 root root 8.1G Mar 7 12:43 guest*

Is there any way to preallocate metadata for only half, or for a predefined size? i.e. 4 GB would be allocated up front and the remaining 4 GB allocated dynamically. -- *Pankaj Rawat*
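For context, preallocation=metadata only writes the L1/L2 tables up front, not the data clusters, so the file's apparent size and its actual allocation differ; the thread's own ls output (total 1.4M against an 8.1G file) already shows this. The two can be compared like so:

```shell
qemu-img create -f qcow2 -o preallocation=metadata guest 8G

ls -lh guest        # apparent size: ~8.1G (virtual size plus metadata)
du -h guest         # blocks actually on disk: only the metadata (~1.4M here)
qemu-img info guest # "disk size" likewise reports the real allocation
```

In other words, the 8 GB is not really consumed; data clusters are still allocated on demand as the guest writes.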
[Qemu-devel] effect of cluster_size on qcow2 performance during expansion
Hi all, I have been working with the qcow2 image format. In theory, regarding cluster size, it is written that performance should increase as the cluster size increases. But something surprising happened: performance degrades as the cluster size is increased from *64K* to *1M* or *2M* (during expansion of the qcow2 image). By expansion I mean that the qcow2 image is initially small and grows as we perform write operations on it. Here are the readings taken with the fio tool during write operations:

Cluster size: 32K -- 587 MB/sec (aggrb)
Cluster size: 64K -- 697 MB/sec (aggrb)
Cluster size: 1M -- 440 MB/sec (aggrb)
Cluster size: 2M -- 289 MB/sec (aggrb)

As can be seen above, 64K performs best. Can anyone tell me why? -- *Pankaj Rawat*
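The thread does not give the actual fio parameters used, so the job below is only a hedged sketch of the kind of allocating-write benchmark being described (device name and sizes are assumptions):

```shell
# Illustrative fio job: writes into a fresh, still-sparse guest disk, so
# every request that touches a new region is an allocating write in the
# qcow2 layer underneath.
fio --name=alloc-write --filename=/dev/vdb --rw=write \
    --bs=4k --size=1G --ioengine=libaio --direct=1 --group_reporting
```

The aggrb figure quoted above is the aggregate bandwidth fio reports for the group at the end of such a run.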
Re: [Qemu-devel] Cluster_size parameter issue on qcow2 image format
Thanks for the reply. I am not using a backing file; my concern is the growing image. The performance of 64K is better than 1M, 2M, or 32K. Is the degradation in performance only due to the allocation of large clusters during expansion of the qcow2 image? The trend is the same for both sequential and random writes of 1 GB of data. In the random case I can understand the sparseness of the data, but in the sequential case I don't, since the writes are performed sequentially. Is there any reason behind it, or am I not getting it right?

On Thu, Feb 23, 2012 at 2:02 PM, Stefan Hajnoczi stefa...@gmail.com wrote:

On Thu, Feb 23, 2012 at 11:01:46AM +0530, PANKAJ RAWAT wrote: In theory regarding cluster size it is written that as the cluster size increases performance should increase. But something surprising happened: performance degrades as the cluster size is increased from 64K to 1M (during expansion of the qcow2 image).

It's not true that performance should increase by raising the cluster size, otherwise the default would be infinity. It's an algorithms/data-structures trade-off. Most important is the relative latency between a small guest I/O request (e.g. 4 KB) and the cluster size (e.g. 64 KB). If the cluster-size latency is orders of magnitude larger than that of a small guest I/O request, then be prepared to see the extreme effects described below:

* Bigger clusters decrease the frequency of metadata operations and increase metadata cache hit rates. Bigger clusters mean less metadata, so qcow2 performs fewer metadata operations overall. Performance boost.

* Bigger clusters increase the cost of allocating a new cluster. For example, an 8 KB write to a new cluster will incur a 1 MB write to the image file (the untouched regions are filled with zeros). This can be optimized in some cases but not everywhere (e.g. reallocating a data cluster versus extending the image file size and relying on the file system to provide zeroed space). This is especially expensive when a backing file is in use, because up to 1 MB of the backing file needs to be read to populate the newly allocated cluster! Performance loss.

* Bigger clusters can reduce fragmentation of data on the physical disk. The file system sees fewer, bigger allocating writes and is therefore able to allocate more contiguous data -- less fragmentation. Performance boost.

* Bigger clusters reduce the compactness of sparse files. You use more image file space on the host file system when the cluster size is large. Space-efficiency loss.

Here's a scenario where a 1 MB cluster size is great compared to a small cluster size: you have a fully allocated qcow2 image, so you never need to do any allocating writes. Here's a scenario where a 1 MB cluster size is terrible compared to a small cluster size: you have an empty qcow2 file and perform 4 KB writes to the first sector of each 1 MB chunk, and there is a backing file. So it depends on the application.

Stefan -- *Pankaj Rawat*
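Stefan's allocation-cost point can be put into numbers. A small model (my own illustration, not from the thread) of the worst case he describes, where each small write lands in a fresh cluster of an empty image and the whole cluster must be materialized:

```python
def write_amplification(cluster_size, request_size):
    """Write amplification when every guest request lands in a fresh cluster:
    the image file receives a full cluster (request data plus zero fill)
    for each request_size bytes the guest actually wrote."""
    return cluster_size / request_size

KB, MB = 1024, 1024 * 1024

# 4 KB guest writes, one per new cluster, on an empty image:
print(write_amplification(64 * KB, 4 * KB))   # 16.0  with 64K clusters
print(write_amplification(1 * MB, 4 * KB))    # 256.0 with 1M clusters
```

So going from 64K to 1M clusters multiplies the allocation-time zeroing cost by 16 in this pattern, which is consistent with the degradation reported earlier in the thread.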
Re: [Qemu-devel] Cluster_size parameter issue on qcow2 image format
Thanks for the reply.

On Thu, Feb 23, 2012 at 3:46 PM, Stefan Hajnoczi stefa...@gmail.com wrote:

On Thu, Feb 23, 2012 at 10:02 AM, PANKAJ RAWAT pankajr...@gmail.com wrote: Is the degradation in performance only due to allocation of large clusters during expansion of the qcow2 image? The trend is the same for both sequential and random writes of 1 GB of data. In the random case I can understand the sparseness of the data, but in the sequential case I don't, since the writes are performed sequentially. Is there any reason behind it, or am I not getting it right?

Sequential writes still require qcow2 to allocate clusters. The first write request that touches a new cluster causes qcow2 to allocate the full 1 MB. Then the next few sequential write requests overwrite in place (these requests do not suffer allocation overhead). Now if you imagine doing 4 KB requests in the guest with a 1 MB cluster size, you should find that n * 4 KB of guest writes touch n * 4 KB / 1 MB new clusters, and the host does extra I/O to the image file for each of them because it is zeroing each allocated cluster! Linux I/O requests tend to be 128 or 256 KB maximum with virtio-blk. So even if your request size in guest userspace is 1 MB, you're probably doing multiple virtio-blk requests underneath. Therefore you are hitting the sequential allocating-write pattern I described above. The exact overhead depends on your application's I/O request pattern, but it's unsurprising that you experience a performance impact.

Stefan -- *Pankaj Rawat*
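The request-splitting point can be illustrated with a small sketch of my own (256 KB stands in for the virtio-blk maximum Stefan mentions):

```python
def split_requests(total_bytes, max_request):
    """Split one large guest write into the smaller requests the block layer issues."""
    return [min(max_request, total_bytes - off)
            for off in range(0, total_bytes, max_request)]

KB, MB = 1024, 1024 * 1024

# A single 1 MB userspace write becomes four 256 KB virtio-blk requests.
# Only the first request into a fresh 1 MB cluster pays the allocation
# (zero-filling) cost; the remaining three overwrite in place.
print(split_requests(1 * MB, 256 * KB))   # [262144, 262144, 262144, 262144]
```

This is why even large sequential userspace writes still exhibit the allocating-write pattern: the allocation is triggered by the first sub-request, before the rest of the cluster's data has arrived.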
[Qemu-devel] Backing file Explanation ?
Hi all, can anyone explain backing files? How are they written, and in what fashion? And what role do they play when clusters are allocated?
[Qemu-devel] Cluster_size parameter issue on qcow2 image format
Hi all, I have been working with the qcow2 image format. In theory, regarding cluster size, it is written that performance should increase as the cluster size increases. But something surprising happened: performance degrades as the cluster size is increased from 64K to 1M (during expansion of the qcow2 image). Can anyone tell me why?