I'm not familiar with the VirtualBox code base, but it looks like
"vdIOIntWriteMeta" can work both synchronously and asynchronously, and
"vdiBlockAllocUpdate" looks asynchronous to me. Frankly, skimming through the
code for 5 minutes doesn't enlighten me too much on its detailed
implementation, but it looks like at
On 10.05.2015 18:10, Paolo Bonzini wrote:
On 10/05/2015 18:02, Stefan Weil wrote:
Since the default qemu-img convert case isn't slowed down, I
would think that correctness trumps performance. Nevertheless,
it's a huge difference.
I doubt that the convert case isn't slowed down.
The *default* convert case isn't slowed down
On 10.05.2015 17:01, Paolo Bonzini wrote:
On 09/05/2015 05:54, phoeagon wrote:
zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t writeback -O
vdi /run/shm/rand 1.vdi
real    0m8.678s
user    0m0.169s
sys     0m0.500s
zheq-PC sdb # time qemu-img convert -f raw -t writeback -O vdi /run
On 10/05/2015 18:02, Stefan Weil wrote:
>> Since the default qemu-img convert case isn't slowed down, I
>> would think that correctness trumps performance. Nevertheless,
>> it's a huge difference.
>
> I doubt that the convert case isn't slowed down.
The *default* convert case isn't slowed down
Just for clarity: I was not writing to tmpfs, I was READING from tmpfs. I
was writing to a path named 'sdb' (as you can see in the prompt), which is a
btrfs on an SSD drive. I don't have an HDD to test on, though.
On Mon, May 11, 2015 at 12:02 AM Stefan Weil wrote:
> On 10.05.2015 17:01, Pao
On 09/05/2015 05:54, phoeagon wrote:
> zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t writeback
> -O vdi /run/shm/rand 1.vdi
>
> real    0m8.678s
> user    0m0.169s
> sys     0m0.500s
>
> zheq-PC sdb # time qemu-img convert -f raw -t writeback -O vdi /run/shm/rand
> 1.vdi
> real    0m4.320
BTW, how do you usually measure the time it takes to install a Linux distro?
Most distro ISOs do NOT have unattended installation in place. (True,
I could bake my own ISOs for this...) But do you have any ISOs made ready for
this purpose?
On Sat, May 9, 2015 at 11:54 AM phoeagon wrote:
> Thanks
Thanks. Dbench does not logically allocate new disk space all the time,
because it's an FS-level benchmark that creates files and deletes them.
Therefore it also depends on the guest FS: say, a btrfs guest FS allocates
about 1.8x the space of EXT4, due to its COW nature. It does cause
the FS to
On 08.05.2015 15:55, Kevin Wolf wrote:
On 08.05.2015 15:14, Max Reitz wrote:
On 07.05.2015 17:16, Zhe Qiu wrote:
In reference to b0ad5a45...078a458e, metadata writes to
qcow2/cow/qcow/vpc/vmdk are all synced prior to succeeding writes.
bdrv_flush is called only when the write is successful.
Then I would guess the same reasoning should apply to VMDK/VPC as well...
Their metadata update protocol is not atomic either, and a sync after a
metadata update doesn't theoretically fix the whole thing either. Yet the
metadata sync patches, as old as 2010, are still there. It should also
be a perf
On 08.05.2015 15:14, Max Reitz wrote:
> On 07.05.2015 17:16, Zhe Qiu wrote:
> >In reference to b0ad5a45...078a458e, metadata writes to
> >qcow2/cow/qcow/vpc/vmdk are all synced prior to succeeding writes.
> >
> >bdrv_flush is called only when the write is successful.
> >
> >Signed-off-by: Zhe Qiu
On 07.05.2015 17:16, Zhe Qiu wrote:
In reference to b0ad5a45...078a458e, metadata writes to
qcow2/cow/qcow/vpc/vmdk are all synced prior to succeeding writes.
bdrv_flush is called only when the write is successful.
Signed-off-by: Zhe Qiu
---
block/vdi.c | 3 +++
1 file changed, 3 insertions(+)