On Thu 22 Oct 2020 10:56:46 PM CEST, Yoonho Park wrote:
I am still seeing the performance degradation, but I did find something
interesting (and promising) with qemu 5.1.50. Enabling subcluster
allocation support in qemu 5.1.50 (extended_l2=on) eliminates the
performance degradation of adding an overlay. Without subcluster
allocation enabled, I [...]
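
For reference, a minimal sketch of creating an overlay with subcluster
allocation enabled (file names and the 128k cluster size are assumptions,
not the exact commands from this thread):

  qemu-img create -f qcow2 -o extended_l2=on,cluster_size=128k \
      -b base.qcow2 -F qcow2 overlay.qcow2
  # "qemu-img info overlay.qcow2" should report "extended l2: true".

With 128k clusters, each of the 32 subclusters is 4k, so a 4K guest write
covers whole subclusters and needs no copy-on-write read from the backing
chain.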
On Thu 27 Aug 2020 06:29:15 PM CEST, Yoonho Park wrote:
Below is the data with the cache disabled ("virsh attach-disk ... --cache
none"). I added the previous data for reference. Overall, random read
performance was not affected significantly. This makes sense because a
cache is probably not going to help random read performance much. BTW, how
big the [...]
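
A minimal sketch of the attach step with host caching disabled (domain,
image path, and target name are placeholders):

  virsh attach-disk testvm /var/lib/libvirt/images/data.qcow2 vdb \
      --driver qemu --subdriver qcow2 --cache none --persistent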
I used strace to collect the writes, and as far as I can tell they are
aligned to the cluster size (64K). Below are some examples...
  1520414 pwritev(35,
      [{iov_base="\5\277z\314\24\305\177\r\340\327\f:e\222\10\33\374\232Q;FuN\t\0\0\325\275\0\0\0\0"...,
        iov_len=4096}, [...]
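
For anyone reproducing this, a sketch of one way to capture such traces
(assuming a single QEMU process on the host):

  # -f follows all QEMU threads; -yy decodes fd numbers into paths, so
  # fd 35 above can be matched to the qcow2 image file.
  strace -f -yy -e trace=pwritev,pwritev2 -p "$(pgrep -f qemu-system)"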
Great. I will give your patches a try. Also, your workaround suggestion and
a discussion with a colleague gave me another idea for an experiment. Is it
possible that some of the overhead I am seeing is from the operations
necessary to increase the size of the overlay? Would it make sense to use [...]
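
One way to test that hypothesis, sketched under assumptions (the file
names and the pre-writing approach are mine): allocate every cluster of
the overlay before the benchmark, so host-file growth and L2 allocation
are excluded from the measured run. qcow2 refuses preallocation together
with a backing file, so the allocation has to be done by writing:

  qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
  # Write a fill pattern in 1G chunks (a single request cannot reach 2G);
  # this shadows the backing data, so it only suits raw I/O experiments.
  for off in $(seq 0 15); do
      qemu-io -f qcow2 -c "write ${off}G 1G" overlay.qcow2
  done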
I create the attached disk with the following commands. The "qemu-img info"
call is there to double-check the cluster size. I am running the same
experiments now with "--cache none" added to the "virsh attach-disk"
command. Is this sufficient to avoid the host page cache?
qemu-img create -f qcow2 [...]
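
The command above is cut off in the archive; a plausible full form, with
the file name assumed and the size taken from the rest of the thread:

  qemu-img create -f qcow2 -o cluster_size=64k data.qcow2 16G
  qemu-img info data.qcow2   # expect "cluster_size: 65536"

As for the cache question: "--cache none" becomes cache='none' in the
libvirt disk XML, which opens the image with O_DIRECT and therefore does
bypass the host page cache; the guest's own page cache is unaffected.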
On Wed 26 Aug 2020 03:18:32 PM CEST, Kevin Wolf wrote:
>> My understanding is that writing 4K blocks requires a
>> read-modify-write because you must fetch a complete cluster from
>> deeper in the overlay chain before writing to the active
>> overlay. However, this does not explain the drop in [...]
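
A small demonstration of that read-modify-write, with assumed file names:
a 4K write into an unallocated 64K cluster of the overlay makes QEMU read
the other 60K from the backing chain and write out a full 64K cluster.

  qemu-io -f qcow2 -c 'write 4k 4k' overlay.qcow2
  # "qemu-img map overlay.qcow2" now shows the whole containing 64K
  # cluster allocated in the overlay, not just the 4K that was written.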
On Wed, Aug 26, 2020 at 15:18:32 +0200, Kevin Wolf wrote:
> On 26.08.2020 at 02:46, Yoonho Park wrote:
> > Another issue I hit is that I cannot set or change the cluster size of
> > overlays. Is this possible with "virsh snapshot-create-as"?
>
> That's a libvirt question. Peter, can you [...]
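
For context, an external snapshot overlay is created along these lines
(domain and file names are placeholders); the command itself exposes no
knob for the overlay's qcow2 cluster size:

  virsh snapshot-create-as testvm snap1 --disk-only \
      --diskspec vdb,file=/var/lib/libvirt/images/overlay1.qcow2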
On 26.08.2020 at 02:46, Yoonho Park wrote:
I have been measuring the performance of qcow2 overlays, and I am hoping to
get some help in understanding the data I collected. In my experiments, I
created a VM and attached a 16G qcow2 disk to it using "qemu-img create"
and "virsh attach-disk". I use fio to fill it. I create some number of [...]
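
A sketch of the fill step described above (the fio job parameters are
assumptions, not the exact invocation from these experiments):

  # Sequentially write the whole attached disk once so every cluster is
  # allocated in the qcow2 image before the overlay measurements start.
  fio --name=fill --filename=/dev/vdb --rw=write --bs=1M \
      --ioengine=libaio --iodepth=8 --direct=1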