On 24-08-2021 11:37, Kevin Wolf wrote:
[ Cc: qemu-block ]
Am 11.08.2021 um 13:36 hat Christopher Pereira geschrieben:
Hi,
I'm reading a directory with 5,000,000 files (2.4 GB) inside a guest
using "find | grep -c".
On the host I saw high write IO (40 MB/s!) for over an hour using
virt-top.
I later repeated the read-only operation inside the guest and no
additional data was written on the host.
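The read test itself can be sketched as follows (a tiny demo directory stands in for the original 5,000,000 files; all paths here are made up):

```shell
# Build a small demo tree, then count files the same way as in the report
demo=$(mktemp -d)
touch "$demo/f1" "$demo/f2" "$demo/f3"
find "$demo" -type f | grep -c .   # prints 3
rm -rf "$demo"
```

The write activity on the host was observed with virt-top; "iostat -x 1" on the host device is another common way to watch it.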
On 13-Mar-17 08:21, Kevin Wolf wrote:
I think the state of the art is to give the VMs enough memory rather [...]
Thanks Kevin. This was the answer I expected, so here follows my second
question:
By increasing virtual RAM, I'm afraid that the guest OS may detect and
use more memory, causing the host to swap. Guest swap would also
be handled by the underlying Linux filesystem, of course. Can we tell
QEMU to use the Linux swap disk for creating a guest swap disk? What's
the state of the art regarding swap disk virtualization?
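One common setup, if swap inside the guest is wanted at all, is simply a dedicated virtual disk that the guest formats as swap; the host then handles its I/O like any other image file. A minimal sketch (the image name, size, and the guest device /dev/vdb are assumptions, not anything prescribed by QEMU):

```shell
# Host: create a separate image to back the guest's swap disk
qemu-img create -f qcow2 guest-swap.qcow2 4G

# Guest: after attaching the disk (assumed to appear as /dev/vdb):
#   mkswap /dev/vdb
#   swapon /dev/vdb
```
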
Best regards,
--
*J. Christopher Pereira*
IMATRONIX S.A.
www.imatronix.com
[...] destroy data?
If so, is there any difference between shutting down and just
saving/restoring the VM?
Maybe save/restore keeps a cache?
Best regards,
Christopher.
On 19-Dec-16 13:24, Christopher Pereira wrote:
Hi Eric,
Thanks for your great answer.
On 19-Dec-16 12:48, Eric Blake wrote:
Then we do the rebase while the VM is suspended to make sure the image
files are reopened.
That part is where you are liable to break things. Qemu does NOT have a
graceful way to reopen the backing chain, so [...] here.
We are not using block-commit since we want to have more control (keep
the base snapshot unmodified, use compression, etc).
Best regards,
Christopher
On 19-Dec-16 10:55, Stefan Hajnoczi wrote:
On Mon, Dec 19, 2016 at 09:07:43AM +0800, Fam Zheng wrote:
On Sun, 12/18 20:52, Christopher Pereira wrote:
Hi,
We are doing a "qemu-img convert" operation (qcow2, with compression) to
shorten the backing-chain (in the middle of the backing-chain).
In order to force qemu to reopen files, we do a save and restore operation.
Is there a faster way to reopen image files using virsh or qemu?
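For reference, the save/restore cycle described above can be done with virsh like this (the domain name and state-file path are hypothetical):

```shell
# Saving stops the guest and closes its image files ...
virsh save demo-vm /var/tmp/demo-vm.state
# ... do the qemu-img convert / rebase work here ...
# Restoring starts a fresh QEMU process, which reopens every image file
virsh restore /var/tmp/demo-vm.state
```
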
Best regards,
On 12-Jul-16 10:07, Eric Blake wrote:
On 07/11/2016 09:42 PM, Christopher Pereira wrote:
Hi,
Let's say we have this chain:
base <--- sn1 <--- sn2 <--- active
Can we shut down the VM, run "qemu-img convert sn2 -O qcow2 sn2_new", and
rebase active to sn2_new? Is [...] into a
non-active snapshot already supported and stable?
I have tested this qemu-img convert approach and it seems to work, but I
would like to check with someone more familiar with qcow2 internals to
prevent data corruption.
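The offline shortening described above would look roughly like this (file names follow the example chain; -c gives the compression mentioned earlier, and rebase -u only rewrites the backing-file pointer without copying data, which is safe here because the contents are identical):

```shell
# Offline: VM is shut down, chain is base <- sn1 <- sn2 <- active
qemu-img convert -O qcow2 -c sn2 sn2_new        # collapse base+sn1+sn2 into one standalone compressed image
qemu-img rebase -u -b sn2_new -F qcow2 active   # point active at the new image (metadata-only change)
qemu-img info --backing-chain active            # verify the new chain
```
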
Best regards,
Christopher Pereira
Hi Kevin,
I understand. In this case (where the gluster process was killed or
crashed), I guess the best option would be to power off and restart the
VM, which can be done client-side (oVirt + libvirt).
Please mark as Won't fix.
Thanks.
Public bug reported:
oVirt uses libvirt to run QEMU.
Images are passed to QEMU as files, not file descriptors.
When running images from a GlusterFS, the file descriptors may get invalidated
because of network problems or the glusterfs process being restarted.
In this case, the VM goes into a paused state.
On 06-03-2015 14:19, Stefan Hajnoczi wrote:
On Wed, Feb 25, 2015 at 09:32:18PM -0300, Christopher Pereira wrote:
Hi,
Does qemu reopen files on a 'cont' command?
When working with images on a gluster volume, file descriptors may get
into a bad state because of network timeouts, remounting a share,
etc... and reinitializing file descriptors may be useful to get paused
VMs up again.
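One way to check from the host whether a running QEMU still holds stale descriptors is to look at its /proc fd table; files that were deleted or replaced underneath it show up with a "(deleted)" suffix. A quick sketch (the pgrep pattern is an assumption about the binary name):

```shell
# Inspect the open file descriptors of a QEMU process
pid=$(pgrep -f qemu-system | head -n1)
ls -l "/proc/$pid/fd"
```
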
Related BZ: