@Jinank I have not started working on this at all, so please go ahead!
Let me know if I can help with testing or anything; we make quite
extensive use of NBD and qcow2 images internally.
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
@~quentin.casasnovas please report this as new feature request, instead
of adding comment to this one.
https://bugs.launchpad.net/bugs/601946
Title:
[Feature request] qemu-img multi-threaded compressed image conversion
That was also my feeling, so nice to get a confirmation!
Another related thing would be to allow qemu-nbd to write compressed
blocks to its backing image - today, if you use a qcow2 with compression,
any block that is written to gets stored uncompressed in the resulting
image, and you need to recompress the whole image afterwards.
The fact that it's now a coroutine_fn doesn't change much; if anything,
it makes it simpler to handle multiple writes in parallel.
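A minimal Python asyncio sketch (not QEMU code, and the names below are hypothetical) of the point above: once the function is a coroutine, several in-flight requests can compress concurrently in worker threads while a lock keeps the actual writes serialized.

```python
# Hedged sketch: concurrent compression, serialized writes.
# zlib stands in for qcow2's deflate; "out" stands in for the image file.
import asyncio
import zlib


async def write_compressed(out: list, lock: asyncio.Lock, data: bytes) -> None:
    loop = asyncio.get_running_loop()
    # CPU-heavy compression runs off the event loop, in a worker thread
    compressed = await loop.run_in_executor(None, zlib.compress, data)
    async with lock:  # only the write itself stays serialized
        out.append(compressed)


async def convert(chunks: list) -> list:
    out: list = []
    lock = asyncio.Lock()
    await asyncio.gather(*(write_compressed(out, lock, c) for c in chunks))
    return out
```

Run it with `asyncio.run(convert(chunks))`. Note that the output order can differ from the input order, much like out-of-order cluster allocation; a real implementation would track cluster offsets.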
It looks like qcow2_write_compressed() has been removed and turned into
a QEMU coroutine in qemu 2.8.0 (released in December 2016) to support
live compressed backups. Any pointers to start working on this? We
have servers with 128 CPUs and it's very sad to see them compress on a
single CPU.
** Changed in: qemu
Importance: Undecided => Wishlist
qcow2_write_compressed in block/qcow2.c would need to be changed.
Currently that seems to need bigger changes, as it always does compress+write
for one block at a time.
Not sure how well it would handle multiple writes in parallel, so the safest
approach would be to avoid that and just wait for the previous write to finish.
I'd like to note that I use qemu-img to back up snapshots of images.
This works fine, it's just so slow: of my 24 cores, only one is used to
compress the image.
It could be so much faster.
The compression in this case is certainly chunked already; otherwise you
couldn't implement a pseudo block device without reading the entire
stream just to read the last block! As the data in the new disk is
necessarily chunk-compressed, parallelisation is perfectly feasible; it's
just a question of the implementation effort.
There are also projects like http://compression.ca/pbzip2/ . We'll be
facing more and more cores per CPU, so we should use these techniques.
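An illustrative sketch (not the qcow2 on-disk format) of the point about chunking: if each cluster is an independent compressed stream located via an offset table, any cluster can be decompressed, or compressed, on its own, which is exactly what makes both random access and pbzip2-style parallelism possible.

```python
# Hypothetical toy format: an offset table plus concatenated,
# independently compressed 64 KiB clusters.
import zlib

CLUSTER = 64 * 1024


def build_image(src: bytes):
    offsets, blob = [], bytearray()
    for i in range(0, len(src), CLUSTER):
        offsets.append(len(blob))
        blob += zlib.compress(src[i:i + CLUSTER])
    offsets.append(len(blob))  # sentinel: end of the last cluster
    return offsets, bytes(blob)


def read_cluster(offsets, blob, n: int) -> bytes:
    # touches only cluster n's bytes, never the rest of the stream
    return zlib.decompress(blob[offsets[n]:offsets[n + 1]])
```

Because `read_cluster` never looks outside its slice, a pseudo block device can serve the last block without scanning the whole stream, and a converter can hand different clusters to different worker threads.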
1. During the benchmark I used iotop and plain top. qemu-img was eating all of
one CPU (3.07 GHz) while disk streaming was at low speeds.
2. Writes to disk on ext4 are cached very aggressively, so writing in 4 streams
is not the problem.
3. For example, 7z gives a huge speed increase when compressing in multiple
threads.
Hi,
The problem is more than just the compression: with modern CPUs, disk
speed is also a bottleneck, and compression is often stream based. For
now there isn't enough valid data that this qualifies as a bug/RFE.
If you decide to try and implement it, please provide data showing the
improvement.