Thanks for the reply

On Thu, Feb 23, 2012 at 3:46 PM, Stefan Hajnoczi <stefa...@gmail.com> wrote:

> On Thu, Feb 23, 2012 at 10:02 AM, PANKAJ RAWAT <pankajr...@gmail.com>
> wrote:
> > Is the degradation in performance only due to the allocation of large
> > clusters during expansion of the qcow2 image?
> >
> > But the trend is the same in both cases:
> > Sequential write
> > Random write of 1 GB of data
> >
> > For random writes I can understand it because of the sparseness of the
> > data, but for sequential writes I don't understand it, since the writes
> > are performed sequentially.
> >
> > Is there a reason behind this, or am I not getting it right?
>
> Sequential writes still require qcow2 to allocate clusters.
>
> The first write request that touches a new cluster causes qcow2 to
> allocate the full 1 MB.  Then the next few sequential write requests
> overwrite in-place (these requests do not suffer allocation overhead).
>
> Now if you imagine doing 4 KB requests in the guest with 1 MB cluster
> size, you should find that the host is doing roughly n * 4 KB / 1 MB *
> (1 MB - 4 KB) bytes of extra I/O to the image file for n sequential
> writes, because it is zeroing the rest of each newly allocated cluster!
>
> Linux I/O requests tend to be 128 or 256 KB maximum with virtio-blk.
> So even if your request size in guest userspace is 1 MB you're
> probably doing multiple virtio-blk requests underneath.  Therefore you
> are hitting the sequential allocating write pattern I described above.
>
> The exact overhead depends on your application's I/O request pattern
> but it's unsurprising that you experience a performance impact.
>
> Stefan
>
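
To put rough numbers on the allocation overhead described above, here is a
small back-of-the-envelope sketch in Python (not QEMU code). It assumes the
simplest possible model: the first write into a 1 MB cluster initialises the
untouched remainder of that cluster with zeros, and later sequential writes
go in place; the sizes are just the values discussed in this thread.

KB, MB, GB = 1024, 1024 ** 2, 1024 ** 3

def allocation_overhead(total_bytes, request_size, cluster_size):
    """Model sequential writes; return (guest_bytes, extra_zero_fill_bytes)."""
    allocated = set()   # clusters already allocated in the image
    zero_fill = 0       # padding written when a new cluster is allocated
    offset = 0
    while offset < total_bytes:
        cluster = offset // cluster_size
        if cluster not in allocated:
            allocated.add(cluster)
            # First touch: assume the rest of the cluster is written as zeros.
            zero_fill += cluster_size - request_size
        offset += request_size
    return total_bytes, zero_fill

data, extra = allocation_overhead(1 * GB, 4 * KB, 1 * MB)
print(f"guest data: {data // MB} MB, extra zero-fill: {extra // MB} MB")

Under this model the zero-fill traffic is almost a second full copy of the
data (about 1020 MB of padding for 1024 MB written), which matches the extra
I/O term above and explains why even sequential writes slow down although no
sparseness is involved.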

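The request-splitting point can be illustrated the same way: even if guest
userspace issues a single 1 MB write, the block layer typically hands qcow2
several smaller virtio-blk requests (a 256 KB limit is assumed below; the
real limit depends on the guest kernel and virtio configuration), so the
first piece to reach a new cluster still takes the allocating-write path.

KB, MB = 1024, 1024 ** 2
CLUSTER = 1 * MB             # qcow2 cluster size used in this thread
VIRTIO_MAX = 256 * KB        # assumed per-request size limit

allocated = set()
# One 1 MB write from guest userspace, as qcow2 sees it after splitting:
for off in range(0, 1 * MB, VIRTIO_MAX):
    cluster = off // CLUSTER
    path = "allocating write" if cluster not in allocated else "in-place write"
    allocated.add(cluster)
    print(f"virtio-blk request at offset {off // KB:>4} KB: {path}")

The first 256 KB request triggers the allocation and pays the zero-fill for
the parts of the cluster it does not cover; the remaining three overwrite in
place.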


-- 
*Pankaj Rawat*
