Just for clarity, I was not writing to tmpfs. I was READING from tmpfs. I
was writing to a path named 'sdb' (as you can see in the prompt), which is a
btrfs filesystem on an SSD. I don't have an HDD to test on, though.

On Mon, May 11, 2015 at 12:02 AM Stefan Weil <s...@weilnetz.de> wrote:

> Am 10.05.2015 um 17:01 schrieb Paolo Bonzini:
> >
> > On 09/05/2015 05:54, phoeagon wrote:
> >> zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t
> >> writeback -O vdi /run/shm/rand 1.vdi
> >>
> >> real    0m8.678s
> >> user    0m0.169s
> >> sys     0m0.500s
> >>
> >> zheq-PC sdb # time qemu-img convert -f raw -t writeback -O vdi
> >> /run/shm/rand 1.vdi
> >> real    0m4.320s
> >> user    0m0.148s
> >> sys     0m0.471s
> > This means that 3.83 seconds are spent when bdrv_close() calls
> > bdrv_flush().  That's the only difference between writeback
> > and unsafe in qemu-img convert.
> >
> > The remaining part of the time (4.85 seconds instead of 0.49
> > seconds) means that, at least on your hardware, sequential writes
> > to unallocated space become 10 times slower with your patch.
> >
> > Since the default qemu-img convert case isn't slowed down, I
> > would think that correctness trumps performance.  Nevertheless,
> > it's a huge difference.
> >
> > Paolo
>
> I doubt that the convert case isn't slowed down.
>
> Writing to a tmpfs as it was obviously done for the test is not a
> typical use case.
> With real hard disks I expect a significant slowdown.
>
> Stefan
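For anyone who wants to reproduce the comparison above on their own storage, here is a minimal sketch of the benchmark being discussed. The file sizes and paths are illustrative (the original test read a file from tmpfs at /run/shm/rand), and the script skips itself if qemu-img is not installed:

```shell
#!/bin/sh
# Sketch of the writeback-vs-unsafe qemu-img convert comparison.
# Paths, sizes, and output names are assumptions; adjust to your setup.
set -e

if ! command -v qemu-img >/dev/null 2>&1; then
    echo "qemu-img not found; skipping benchmark"
    exit 0
fi

# Create a source image filled with random data (64 MiB here for speed;
# the original test presumably used a much larger file on tmpfs).
SRC=$(mktemp /tmp/rand.XXXXXX)
dd if=/dev/urandom of="$SRC" bs=1M count=64 status=none

# writeback: writes go through the page cache, but bdrv_close()
# still issues a final bdrv_flush(), so the data reaches the disk.
time qemu-img convert -f raw -t writeback -O vdi "$SRC" /tmp/out-wb.vdi

# unsafe: skips flushing entirely; the difference between the two
# runs is roughly the cost of that final flush.
time qemu-img convert -f raw -t unsafe -O vdi "$SRC" /tmp/out-un.vdi

rm -f "$SRC" /tmp/out-wb.vdi /tmp/out-un.vdi
echo "benchmark done"
```

Note that results on tmpfs or an SSD will not predict rotating-disk behavior, which is exactly Stefan's point above.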
