Re: [Qemu-devel] [PATCH RFC 0/3] Checkpoint-assisted migration proposal

2015-10-05 Thread Thomas Knauth
Hi Amit,

On Tue, Sep 15, 2015 at 12:39 PM, Amit Shah wrote:
> Could you please include a file in the docs/ directory that documents
> how this works, so it's easier to comment on the general idea?

Sure, we will add this.

> From 'checkpointing', I was afraid this was going to use some
> checkpoi

[Qemu-devel] feature proposal: checkpoint-assisted migration

2015-04-14 Thread Thomas Knauth
Dear list, my research revolves around cloud computing, virtual machines and migration. In this context I came across the following: a recent study by IBM indicates that a typical VM only migrates between a small set of physical servers; often just two. The potential for optimization is clear. By

Re: [Qemu-devel] Capture SIGSEGV to track pc.ram page access

2013-10-08 Thread Thomas Knauth
On Fri, Sep 27, 2013 at 12:50 PM, Stefan Hajnoczi wrote:
> If you want to continue with the original SIGSEGV handler approach,
> check signal masks for the vcpu threads. Make sure the signal actually
> gets delivered to a thread that has the signal unblocked and a signal
> handler installed.

I'
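As a concrete illustration of that advice (a minimal sketch, not QEMU code; the function names are made up for the example): register the handler with SA_SIGINFO, and unblock SIGSEGV in the specific thread that is supposed to take the fault, since the handler is process-wide but the signal mask is per-thread.

  /* Sketch of Stefan's advice: install a process-wide SIGSEGV handler and
   * make sure the signal is unblocked in the (vcpu) thread that should
   * receive it. Illustrative only, not QEMU code. */
  #include <pthread.h>
  #include <signal.h>
  #include <string.h>

  static void segv_handler(int sig, siginfo_t *info, void *ucontext)
  {
      (void)sig; (void)ucontext;
      /* info->si_addr holds the faulting address; only async-signal-safe
       * calls may be used in here. */
  }

  static void install_segv_handler(void)
  {
      struct sigaction sa;
      memset(&sa, 0, sizeof(sa));
      sa.sa_sigaction = segv_handler;
      sa.sa_flags = SA_SIGINFO;
      sigemptyset(&sa.sa_mask);
      sigaction(SIGSEGV, &sa, NULL);   /* handler is process-wide */

      sigset_t set;                    /* mask is per-thread: run this in */
      sigemptyset(&set);               /* the thread that must take the   */
      sigaddset(&set, SIGSEGV);        /* fault                           */
      pthread_sigmask(SIG_UNBLOCK, &set, NULL);
  }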

Re: [Qemu-devel] Capture SIGSEGV to track pc.ram page access

2013-09-26 Thread Thomas Knauth
As far as I understand, the dirty logging infrastructure will only record writes. I want to track reads as well. A better way to express what I would like to do is to trace all guest physical addresses that are accessed. Again, I am unsure whether qemu supports this out of the box and where I would ha
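One userspace-level way to make reads visible too is sketched below, under the assumption that the RAM block is an ordinary mapping in the QEMU process, and leaving aside whether the fault actually surfaces as a SIGSEGV in the vcpu thread under KVM (which is what the rest of this thread is about): drop all permissions on the mapping, so loads fault just like stores.

  /* Sketch: revoke all access to the host mapping of guest RAM so that
   * both reads and writes fault. 'host_addr' and 'ram_size' are
   * illustrative stand-ins for wherever pc.ram lives in the process. */
  #include <stddef.h>
  #include <stdio.h>
  #include <sys/mman.h>

  static int trap_all_accesses(void *host_addr, size_t ram_size)
  {
      if (mprotect(host_addr, ram_size, PROT_NONE) < 0) {
          perror("mprotect");
          return -1;
      }
      /* Every subsequent load or store into the region now raises
       * SIGSEGV; a handler can log info->si_addr and restore
       * PROT_READ | PROT_WRITE for the touched page. */
      return 0;
  }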

[Qemu-devel] Capture SIGSEGV to track pc.ram page access

2013-09-01 Thread Thomas Knauth
Dear all, I'm trying to use a signal handler to catch SIGSEGVs in qemu. I want(ed) to use them to track which memory pages are accessed by the guest (only accesses to the pc.ram). After some hours of fruitless mucking around, I've come to the conclusion that it is not as straightforward as with "
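For reference, the bare mechanism being attempted here -- protect a region, catch the SIGSEGV, note the page, re-enable access and let the instruction retry -- looks roughly like the stand-alone test program below. It is a plain userspace sketch, not QEMU code, and deliberately ignores everything that makes this hard inside QEMU (vcpu threads, KVM, async-signal safety of mprotect()).

  /* Stand-alone sketch of the protect / fault / record / unprotect cycle.
   * Not QEMU code; build with e.g. "cc -o trap trap.c" and run it. */
  #include <signal.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static long page_size;

  static void segv_handler(int sig, siginfo_t *info, void *ctx)
  {
      (void)sig; (void)ctx;
      char *page = (char *)((uintptr_t)info->si_addr &
                            ~((uintptr_t)page_size - 1));
      /* "Record" the access; write() is async-signal-safe, printf() is not. */
      write(STDOUT_FILENO, "fault\n", 6);
      /* Re-enable access so the faulting instruction can be restarted. */
      mprotect(page, page_size, PROT_READ | PROT_WRITE);
  }

  int main(void)
  {
      page_size = sysconf(_SC_PAGESIZE);

      struct sigaction sa;
      memset(&sa, 0, sizeof(sa));
      sa.sa_sigaction = segv_handler;
      sa.sa_flags = SA_SIGINFO;
      sigemptyset(&sa.sa_mask);
      sigaction(SIGSEGV, &sa, NULL);

      /* Stand-in for pc.ram: a few anonymous pages with no access rights. */
      char *region = mmap(NULL, 4 * page_size, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (region == MAP_FAILED) {
          return 1;
      }

      region[0] = 1;                  /* write -> one fault logged  */
      char c = region[page_size];     /* read  -> another fault     */
      (void)c;
      return 0;
  }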

[Qemu-devel] lazy instance resume

2013-04-04 Thread Thomas Knauth
Dear all, I'm interested in fast instance resume times, i.e., a migration where the source is a file on disk. The basic idea is that we don't need to read the entire memory dump from disk to kick off execution. This is similar to what is, for example, done with post-copy live migration. Resume fro
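A minimal illustration of the "don't read everything up front" idea, assuming the saved RAM contents sit contiguously in a file: map the file instead of read()ing it, and let the kernel pull individual pages in on first access. This shows only the core trick, not QEMU's actual savevm stream layout (which interleaves RAM with device state); the file name is made up.

  /* Sketch: back guest RAM with a memory dump file via mmap() so pages are
   * faulted in lazily instead of being read up front. 'dump.img' and the
   * flat layout are assumptions for illustration. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("dump.img", O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      struct stat st;
      fstat(fd, &st);

      /* MAP_PRIVATE: guest writes go to anonymous copy-on-write pages, the
       * on-disk checkpoint stays untouched. Nothing is read from disk yet. */
      void *ram = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE, fd, 0);
      if (ram == MAP_FAILED) { perror("mmap"); return 1; }

      /* Touching a byte triggers a major fault that pulls in just that
       * page; the rest of the file is only read if it is actually used. */
      volatile char first = ((char *)ram)[0];
      (void)first;

      munmap(ram, st.st_size);
      close(fd);
      return 0;
  }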

[Qemu-devel] virsh resume/qemu loadvm (from disk) latency

2013-03-25 Thread Thomas Knauth
Dear all, why does the resume time depend on the maximum amount of memory the instance is configured with? For a dump size of 500 MB, resuming the instance takes 2/3/5 seconds for a virtual machine configured with 1/2/4 GB of RAM. I measure the time it takes for the 'virsh restore ' command to retu
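For what it's worth, the numbers quoted above fit a cost that scales with the configured RAM size rather than with the 500 MB actually restored (a rough linear fit, assuming nothing else dominates):

  1 GB of RAM  ->  2 s
  2 GB of RAM  ->  3 s
  4 GB of RAM  ->  5 s
  =>  t ~ 1 s + (1 s per GB of configured RAM), independent of the dump size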

Re: [Qemu-devel] kvm suspend performance

2013-03-23 Thread Thomas Knauth
Hi Eric, thanks for the reply. This indeed solved my issue. Suspending is much faster without the artificial throttle. On a related note: I'm curious about the baseline resume latency. It takes about 5 seconds to resume an instance with a tiny amount of state (500 MB dump size). The data is all i

Re: [Qemu-devel] kvm suspend performance

2013-03-20 Thread Thomas Knauth
Hi Stefan,

thanks for taking the time to reply.

On Wed, Mar 20, 2013 at 9:11 AM, Stefan Hajnoczi wrote:
> Which QEMU or libvirt command are you using to suspend the guest to
> disk?

virsh save

> Why do you say it is CPU-bound? Did you use a tool like vmstat or
> simply because it does 3

[Qemu-devel] kvm suspend performance

2013-03-19 Thread Thomas Knauth
Dear all, lately I've been playing around with qemu's/kvm's suspend (to disk) and resume. My initial expectation was that both operations are I/O-bound. So it surprised me to see that suspend to disk seems to be CPU-bound. Suspending a VM with 1.5 GB of memory takes 55 seconds. This works out to less
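Doing the arithmetic on those figures:

  1.5 GB / 55 s  ~  1536 MB / 55 s  ~  28 MB/s

That is far below what the disk or a single core should sustain for copying memory, and (if memory serves) is in the ballpark of the default migration bandwidth cap QEMU applied at the time, roughly 32 MB/s -- which fits the "artificial throttle" explanation given in the follow-up higher up in this list.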