CCing live migration developers who should be interested in this work,
On Mon, 12 Nov 2012 21:10:32 -0200
Marcelo Tosatti mtosa...@redhat.com wrote:
On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote:
Do not drop a large spte until it can be replaced by small pages, so that
the
On Tue, 19 Jun 2012 09:01:36 -0500
Anthony Liguori anth...@codemonkey.ws wrote:
I'm not at all convinced that postcopy is a good idea. There needs to be a
clear expression of what the value proposition is, backed by benchmarks.
Those benchmarks need to include latency measurements of
On Sat, 28 Apr 2012 19:05:44 +0900
Takuya Yoshikawa takuya.yoshik...@gmail.com wrote:
1. Problem
During live migration, if the guest tries to take mmu_lock at the same
time as GET_DIRTY_LOG, which is called periodically by QEMU, it may be
forced to wait a long time
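For context, GET_DIRTY_LOG here is the KVM_GET_DIRTY_LOG vm ioctl that QEMU
issues once per migration iteration to fetch and clear a slot's dirty bitmap;
the kernel-side walk takes mmu_lock, which is the contention described above.
A minimal userspace sketch (vm_fd, slot, and the bitmap allocation are
assumptions for illustration):

=
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch only: fetch the dirty bitmap for one memslot.
 * bitmap must hold one bit per page in the slot. */
static int get_dirty_log(int vm_fd, unsigned int slot, void *bitmap)
{
    struct kvm_dirty_log log = {
        .slot = slot,
        .dirty_bitmap = bitmap,
    };

    return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}
=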
On Wed, 02 May 2012 14:33:55 +0300
Avi Kivity a...@redhat.com wrote:
=
perf top -t ${QEMU_TID}
=
51.52% qemu-system-x86_64 [.] memory_region_get_dirty
16.73% qemu-system-x86_64 [.] ram_save_remaining
Avi Kivity a...@redhat.com wrote:
Slot searching is quite fast since there's a small number of slots, and
we sort the larger ones to be in the front, so positive lookups are fast.
We cache negative lookups in the shadow page tables (an spte can be
either not mapped, mapped to
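A rough sketch of the lookup pattern being described (the structure and
names are illustrative, not the actual KVM code):

=
/* Slots kept sorted by size, largest first, so positive lookups hit
 * the big (hot) slots in the first iterations; a miss returns NULL
 * and is then cached as a negative entry in the spte. */
struct memslot {
    unsigned long base_gfn;   /* first guest frame number */
    unsigned long npages;
};

static struct memslot *gfn_to_slot(struct memslot *slots, int nslots,
                                   unsigned long gfn)
{
    int i;

    for (i = 0; i < nslots; i++) {
        if (gfn >= slots[i].base_gfn &&
            gfn < slots[i].base_gfn + slots[i].npages)
            return &slots[i];
    }
    return NULL;   /* negative lookup */
}
=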
Hope to get comments from live migration developers,
Anthony Liguori anth...@codemonkey.ws wrote:
Guest memory management
---
Instead of managing each memory slot individually, a single API will be
provided that replaces the entire guest physical memory map
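As a purely hypothetical illustration of that proposal (neither this struct
nor the call exists in KVM; today slots are updated one at a time through
KVM_SET_USER_MEMORY_REGION):

=
#include <stdint.h>

/* Hypothetical: one call that atomically replaces the whole guest
 * physical memory map, letting the kernel diff it against the
 * current map instead of userspace managing individual slots. */
struct guest_phys_region {
    uint64_t guest_phys_addr;
    uint64_t size;
    uint64_t userspace_addr;
};

struct guest_phys_map {
    uint32_t nregions;
    struct guest_phys_region regions[];
};

int set_guest_phys_map(int vm_fd, const struct guest_phys_map *map);
=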
(2012/01/13 10:09), Benoit Hudzia wrote:
Hi,
Sorry to hijack the thread like this, but I would like
to inform you that we recently achieved a milestone in the
research project I'm leading. We enhanced KVM in order to deliver
post-copy live migration using RDMA at
(2012/01/01 18:52), Dor Laor wrote:
But we really need to think hard about whether this is the right thing
to take into the tree. I worry a lot about the fact that we don't test
pre-copy migration nearly enough and adding a second form just
introduces more things to test.
It is an issue but it
Avi Kivity a...@redhat.com wrote:
That's true. But some applications do require low latency, and the
current code can spend a lot of time with the mmu spinlock held.
The total amount of work actually increases slightly, from O(N) to O(N
log N), but since the tree is so wide, the overhead
CCing qemu-devel, Juan,
(2011/11/29 23:03), Avi Kivity wrote:
On 11/29/2011 02:01 PM, Avi Kivity wrote:
On 11/29/2011 01:56 PM, Xiao Guangrong wrote:
On 11/29/2011 07:20 PM, Avi Kivity wrote:
We used to have a bitmap in a shadow page with a bit set for every slot
pointed to by the page.
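For reference, a simplified sketch of that per-shadow-page bitmap (the
constant and field names are approximations, not the exact kernel
definitions):

=
#include <linux/bitmap.h>

#define MEM_SLOTS_NUM 32   /* assumption for the sketch */

struct shadow_page {
    /* one bit per memslot; set when this shadow page maps a gfn
     * that falls inside that slot */
    DECLARE_BITMAP(slot_bitmap, MEM_SLOTS_NUM);
};

static void mark_slot(struct shadow_page *sp, int slot_id)
{
    __set_bit(slot_id, sp->slot_bitmap);
}
=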
(2011/11/30 14:02), Takuya Yoshikawa wrote:
IIUC, even though O(1) is O(1) at the time of GET_DIRTY_LOG, it needs O(N)
write protections with respect to the total number of dirty pages:
distributed, but actually each page fault, which should be logged, does
some write protection?
Sorry
Adding qemu-devel to Cc.
(2011/11/14 21:39), Avi Kivity wrote:
On 11/14/2011 12:56 PM, Takuya Yoshikawa wrote:
(2011/11/14 19:25), Avi Kivity wrote:
On 11/14/2011 11:20 AM, Takuya Yoshikawa wrote:
This is a revised version of my previous work. I hope that
the patches are more self
Adding qemu-devel ML to CC.
Your question should have been sent to the qemu-devel ML because the logic
is implemented in QEMU, not KVM.
(2011/11/11 1:35), Oliver Hookins wrote:
Hi,
I am performing some benchmarks on KVM migration on two different types of VM.
One has 4GB RAM and the other 32GB.
Vivek Goyal vgo...@redhat.com wrote:
So you are using RHEL 6.0 for both the host and guest kernels? Can you
reproduce the same issue with upstream kernels? How easily/frequently
can you reproduce this with a RHEL 6.0 host?
The guests were CentOS 6.0.
I only have RHEL 6.0 and RHEL 6.1 test results right now.
--
Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
What kind of MMIO should be traced here: device- or CPU-originated? Or both?
Jan
To let Kemari replay outputs upon failover, tracing CPU-originated
MMIO (specifically write requests) should be enough.
IIUC, we can reproduce device-originated MMIO as a result of CPU-
originated
(2010/11/30 1:41), Dor Laor wrote:
Is this a fair summary: any device that supports live migration works
under Kemari?
It might be a fair summary, but practically we barely have live migration
working w/o Kemari. In addition, last I checked, Kemari needs additional
hooks and it will be too hard
Thanks for the answers Avi, Juan,
Some FYI, (not about the bottleneck)
On Wed, 01 Dec 2010 14:35:57 +0200
Avi Kivity a...@redhat.com wrote:
- how many dirty pages do we have to care about?
default values and assuming 1 Gigabit ethernet for ourselves, ~9.5MB of
dirty pages to have only 30ms
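The relation behind such figures is the usual pre-copy stop condition: the
bytes still dirty at the final pause must fit within bandwidth times the
allowed downtime, i.e. dirty_max = B * t_down. As an illustrative check with
placeholder numbers (the snippet's own inputs are truncated above): at a
sustained 100 MB/s, a 100 ms downtime budget allows roughly 10 MB of
residual dirty pages.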
On Wed, 01 Dec 2010 02:52:08 +0100
Juan Quintela quint...@redhat.com wrote:
Since we are planning to do some profiling for these, taking Kemari into
account, could you please share this information?
If you look at the 0/10 email with this setup, you can see how much time
we are spending on
(2010/04/22 19:35), Yoshiaki Tamura wrote:
A trivial one would be to:
- do X online snapshots/sec
I don't have good numbers that I can share right now.
Snapshots/sec depends on what kind of workload is running; if the
guest is almost idle, there will be no snapshots in 5 sec.