Thanks for the reply.
I am not using a backing file.
My concern is the growing qcow2 image on the host file system.
The performance with a 64K cluster size is better than with 1M, 2M or 32K.

Is the degradation in performance only due to the allocation of large
clusters during expansion of the qcow2 image?

But the trend is the same for both sequential writes and random writes of
1 GB of data.

For random writes I can understand it because of the sparseness of the data,
but for sequential writes I don't, since the writes are issued in sequential
order.

Is there any reason behind this, or am I not getting it right?
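
Here is a minimal sketch of how I picture the two patterns (not my actual
benchmark): it just counts how many writes land in a not-yet-allocated
cluster of a freshly created image. The 4 KB request size and the 1 GB
target region are assumptions on my side.

import random

GiB = 1024 ** 3
KiB = 1024
REQUEST = 4 * KiB            # assumed guest request size
DATA = 1 * GiB               # total data written in the test

def first_touch_allocations(cluster_size, offsets):
    """Count the writes that hit a cluster for the first time."""
    return len({off // cluster_size for off in offsets})

offsets = [i * REQUEST for i in range(DATA // REQUEST)]
shuffled = offsets[:]
random.shuffle(shuffled)

for cs in (32 * KiB, 64 * KiB, 1024 * KiB, 2048 * KiB):
    print(f"cluster={cs // KiB:>5}K"
          f"  sequential allocs={first_touch_allocations(cs, offsets)}"
          f"  random allocs={first_touch_allocations(cs, shuffled)}")

Both orders end up allocating every cluster in the range, so if the
allocation cost dominates I can see how the trend would be the same; I just
want to confirm that this is the right way to think about it.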

On Thu, Feb 23, 2012 at 2:02 PM, Stefan Hajnoczi <stefa...@gmail.com> wrote:

> On Thu, Feb 23, 2012 at 11:01:46AM +0530, PANKAJ RAWAT wrote:
> > In the theory regarding cluster size it is written that as the cluster
> > size increases, performance should increase.
> >
> > But something surprising happened: the performance degrades as the
> > cluster size is increased from 64K to 1M (during expansion of the qcow2
> > image).
>
> It's not true that performance should increase by raising the cluster
> size, otherwise the default would be infinity.  It's an algorithms/data
> structure trade-off.
>
> Most important is the relative latency between a small guest I/O
> request (e.g. 4 KB) and the cluster size (e.g. 64 KB).  If writing a
> whole cluster takes orders of magnitude longer than a small guest I/O
> request, then be prepared to see extreme versions of the effects
> described below:
>
>  * Bigger clusters decrease the frequency of metadata operations and
>   increase metadata cache hit rates.  Bigger clusters mean less
>   metadata, so qcow2 performs fewer metadata operations overall.
>
>   Performance boost.
>
>  * Bigger clusters increase the cost of allocating a new cluster.  For
>   example, an 8 KB write to a new cluster will incur a 1 MB write to the
>   image file (the untouched regions are filled with zeros).  This can
>   be optimized in some cases but not everywhere (e.g. reallocating a
>   data cluster versus extending the image file size and relying on the
>   file system to provide zeroed space).  This is especially expensive
>   when a backing file is in use because up to 1 MB of the backing file
>   needs to be read to populate the newly allocated cluster!
>
>   Performance loss.
>
>  * Bigger clusters can reduce fragmentation of data on the physical
>   disk.  The file system sees fewer, bigger allocating writes and is
>   therefore able to allocate more contiguous data - less fragmentation.
>
>   Performance boost.
>
>  * Bigger clusters reduce the compactness of sparse files.  You use more
>   image file space on the host file system when the cluster size is
>   large.
>
>   Space efficiency loss.
>
> Here's a scenario where a 1 MB cluster size is great compared to a small
> cluster size:
>
> You have a fully allocated qcow2 image, so you will never need to do any
> allocating writes.
>
> Here's a scenario where a 1 MB cluster size is terrible compared to a
> small cluster size:
>
> You have an empty qcow2 file and perform 4 KB writes to the first sector
> of each 1 MB chunk, and there is a backing file.
>
> So it depends on the application.
>
> Stefan
>
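
Putting rough numbers on the trade-offs described above (my assumptions:
8-byte L2 entries, one per data cluster; the L1 table and refcount blocks
are ignored; the amplification column is the worst case where every 4 KB
write lands in a fresh cluster, similar to your second scenario but without
the backing-file read, since I have no backing file):

GiB = 1024 ** 3
KiB = 1024

virtual_size = 100 * GiB     # assumed guest disk size
request = 4 * KiB            # assumed guest request size

for cluster in (32 * KiB, 64 * KiB, 1024 * KiB, 2048 * KiB):
    l2_bytes = (virtual_size // cluster) * 8   # metadata that wants to stay cached
    worst_amp = cluster // request             # write amplification of an allocating request
    print(f"cluster={cluster // KiB:>5}K"
          f"  L2 metadata ~{l2_bytes // KiB:>6} KiB"
          f"  worst-case allocation amplification ~{worst_amp}x")

If I read this right, going from 64K to 1M saves roughly 12 MiB of L2
metadata on a 100 GiB disk, while the worst-case cost of a 4 KB allocating
write goes from about 16x to 256x, which would be consistent with 64K
winning while the image is still growing.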



-- 
Pankaj Rawat
