Yes, we plan to release the patch as soon as we have cleaned up the code and
we get the green light from our company (and sadly that can take months..)
On 13 January 2012 01:31, Takuya Yoshikawa
yoshikawa.tak...@oss.ntt.co.jp wrote:
(2012/01/13 10:09), Benoit Hudzia wrote:
Hi,
Sorry to
On 13 January 2012 02:03, Isaku Yamahata yamah...@valinux.co.jp wrote:
Very interesting. We can cooperate for better (postcopy) live migration.
The code doesn't seem to be available yet; I'm eager to see it.
On Fri, Jan 13, 2012 at 01:09:30AM +, Benoit Hudzia wrote:
Hi,
Sorry to hijack
On 13 January 2012 02:15, Isaku Yamahata yamah...@valinux.co.jp wrote:
One more question.
Does your architecture/implementation (in theory) allow KVM memory
features like swap, KSM, THP?
* Swap: Yes, we support swap to disk (the page is pulled from swap
before being sent over); the swap process do
On 01/03/2012 04:25 PM, Andrea Arcangeli wrote:
So the problem is that if we do it in
userland with the current functionality you'll run out of VMAs and
slow down performance too much.
But all you need is the ability to map single pages in the address
space.
Would this also let
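[A minimal sketch of the concern being discussed here, assuming a pure-userland post-copy scheme in which every demanded page is remapped from a backing file with its own mmap() call. The file name and sizes are made up for illustration; the point is only that per-page mappings tend to become per-page VMAs.]

#include <stdio.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t npages = 1024;            /* small demo; real guests are far larger */

    /* Backing file standing in for the post-copy receive buffer
     * (hypothetical name, just so the sketch is self-contained). */
    int fd = open("postcopy-pages.img", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)npages * page) < 0) { perror("ftruncate"); return 1; }

    /* Reserve the whole guest RAM range once: a single VMA. */
    char *ram = mmap(NULL, npages * page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) { perror("mmap"); return 1; }

    /* Pure-userland post-copy: each demanded page is remapped individually
     * from the receive buffer.  Pages arrive in demand order, so the file
     * offsets of neighbouring virtual pages rarely line up and the mappings
     * cannot be merged -- each one tends to remain its own VMA.  That is the
     * "run out of VMAs" problem; offsets are reversed here to force it. */
    for (size_t i = 0; i < npages; i++) {
        off_t off = (off_t)(npages - 1 - i) * page;
        if (mmap(ram + i * page, page, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_FIXED, fd, off) == MAP_FAILED) {
            perror("mmap page");
            return 1;
        }
    }
    printf("mapped %zu pages one at a time; see the VMA count in /proc/self/maps\n",
           npages);
    return 0;
}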
On 01/04/2012 05:03 AM, Isaku Yamahata wrote:
Yes, it's quite doable in user space (qemu) with a kernel enhancement.
And it would be easy to convert a separate daemon process into a thread
in qemu.
I think it should be done outside of the qemu process for several reasons.
(I just repeat the same
(2012/01/13 10:09), Benoit Hudzia wrote:
Hi,
Sorry to hijack the thread like that, however I would just like to inform
you that we recently achieved a milestone in the research project I'm
leading. We enhanced KVM in order to deliver
post-copy live migration using RDMA at
Very interesting. We can cooperate for better (postcopy) live migration.
The code doesn't seem to be available yet; I'm eager to see it.
On Fri, Jan 13, 2012 at 01:09:30AM +, Benoit Hudzia wrote:
Hi,
Sorry to hijack the thread like that, however I would just like to inform
you that
On Thu, Jan 12, 2012 at 03:57:47PM +0200, Avi Kivity wrote:
On 01/03/2012 04:25 PM, Andrea Arcangeli wrote:
So the problem is that if we do it in
userland with the current functionality you'll run out of VMAs and
slow down performance too much.
But all you need is the ability to
On Thu, Jan 12, 2012 at 03:59:59PM +0200, Avi Kivity wrote:
On 01/04/2012 05:03 AM, Isaku Yamahata wrote:
Yes, it's quite doable in user space (qemu) with a kernel enhancement.
And it would be easy to convert a separate daemon process into a thread
in qemu.
I think it should be done out
One more question.
Does your architecture/implementation (in theory) allow KVM memory
features like swap, KSM, THP?
On Fri, Jan 13, 2012 at 11:03:23AM +0900, Isaku Yamahata wrote:
Very interesting. We can cooperate for better (postcopy) live migration.
The code doesn't seem to be available yet; I'm
Hi,
Sorry to hijack the thread like that, however I would just like to inform
you that we recently achieved a milestone in the research project I'm
leading. We enhanced KVM in order to deliver
post-copy live migration using RDMA at the kernel level.
A few points on the architecture of
On Mon, Jan 02, 2012 at 06:55:18PM +0100, Paolo Bonzini wrote:
On 01/02/2012 06:05 PM, Andrea Arcangeli wrote:
On Thu, Dec 29, 2011 at 06:01:45PM +0200, Avi Kivity wrote:
On 12/29/2011 06:00 PM, Avi Kivity wrote:
The NFS client has exactly the same issue, if you mount it with the intr
On Mon, Jan 02, 2012 at 06:05:51PM +0100, Andrea Arcangeli wrote:
On Thu, Dec 29, 2011 at 06:01:45PM +0200, Avi Kivity wrote:
On 12/29/2011 06:00 PM, Avi Kivity wrote:
The NFS client has exactly the same issue, if you mount it with the intr
option. In fact you could use the NFS client as
On Thu, Dec 29, 2011 at 06:01:45PM +0200, Avi Kivity wrote:
On 12/29/2011 06:00 PM, Avi Kivity wrote:
The NFS client has exactly the same issue, if you mount it with the intr
option. In fact you could use the NFS client as a trivial umem/cuse
prototype.
Actually, NFS can return SIGBUS,
On 01/02/2012 06:05 PM, Andrea Arcangeli wrote:
On Thu, Dec 29, 2011 at 06:01:45PM +0200, Avi Kivity wrote:
On 12/29/2011 06:00 PM, Avi Kivity wrote:
The NFS client has exactly the same issue, if you mount it with the intr
option. In fact you could use the NFS client as a trivial umem/cuse
On 12/29/2011 02:39 PM, Isaku Yamahata wrote:
ioctl commands:
UMEM_DEV_CRATE_UMEM: create umem device for qemu
UMEM_DEV_LIST: list created umem devices
UMEM_DEV_REATTACH: re-attach the created umem device
UMEM_DEV_LIST and UMEM_DEV_REATTACH are used when
On 12/29/2011 04:49 PM, Isaku Yamahata wrote:
Great, then we basically agree on list/reattach.
(Maybe the identity scheme needs reconsideration.)
I guess we miscommunicated. Why is reattach needed? If you have the
fd, nothing else is needed.
What if a malicious process closes the fd
On Thu, Dec 29, 2011 at 04:55:11PM +0200, Avi Kivity wrote:
On 12/29/2011 04:49 PM, Isaku Yamahata wrote:
Great, then we basically agree on list/reattach.
(Maybe the identity scheme needs reconsideration.)
I guess we miscommunicated. Why is reattach needed? If you have the
fd,
On 12/29/2011 06:00 PM, Avi Kivity wrote:
The NFS client has exactly the same issue, if you mount it with the intr
option. In fact you could use the NFS client as a trivial umem/cuse
prototype.
Actually, NFS can return SIGBUS; it doesn't care about restarting daemons.
On Thu, Dec 29, 2011 at 01:24:32PM +0200, Avi Kivity wrote:
On 12/29/2011 03:26 AM, Isaku Yamahata wrote:
This is a Linux kernel driver for qemu/kvm postcopy live migration.
It is used by the qemu/kvm postcopy live migration patch.
TODO:
- Consider FUSE/CUSE option
So far several mmap
On 12/29/2011 05:53 PM, Isaku Yamahata wrote:
On Thu, Dec 29, 2011 at 04:55:11PM +0200, Avi Kivity wrote:
On 12/29/2011 04:49 PM, Isaku Yamahata wrote:
Great, then we basically agree on list/reattach.
(Maybe the identity scheme needs reconsideration.)
I guess we
On 12/29/2011 03:49 PM, Isaku Yamahata wrote:
qemu can have an extra thread that wait4()s the daemon and relaunches
it. This extra thread would not be blocked by the page fault. It can
keep the fd so it isn't lost.
The unkillability of process A is a security issue; it could be done
On Thu, Dec 29, 2011 at 03:52:58PM +0200, Avi Kivity wrote:
On 12/29/2011 03:49 PM, Isaku Yamahata wrote:
qemu can have an extra thread that wait4()s the daemon and relaunches
it. This extra thread would not be blocked by the page fault. It can
keep the fd so it isn't lost.
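[A hedged sketch of this monitor-thread idea, assuming qemu holds on to the umem fd and the fault-serving daemon is a separate executable. The daemon path, argv convention, and the way the fd is handed over are illustrative assumptions, not part of the posted patches.]

#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int umem_fd = -1;   /* the fd qemu keeps; it does not go away with the daemon */

static pid_t spawn_daemon(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        char fdarg[16];
        snprintf(fdarg, sizeof(fdarg), "%d", umem_fd);
        /* Hypothetical helper binary that serves page faults through the fd
         * it inherits; the name and argument convention are made up here. */
        execl("./postcopy-daemon", "postcopy-daemon", fdarg, (char *)NULL);
        _exit(127);
    }
    return pid;
}

static void *daemon_watcher(void *arg)
{
    pid_t pid = (pid_t)(intptr_t)arg;
    for (;;) {
        int status;
        /* This thread never touches guest memory, so it cannot get stuck in
         * a post-copy page fault and is always free to reap the daemon. */
        if (waitpid(pid, &status, 0) < 0)
            break;
        fprintf(stderr, "page daemon exited (status %d), relaunching\n", status);
        pid = spawn_daemon();   /* the kept fd means nothing is lost */
    }
    return NULL;
}

int main(void)
{
    /* Stand-in for the real umem fd so the sketch compiles on its own. */
    umem_fd = open("/dev/null", O_RDWR);

    pthread_t tid;
    pthread_create(&tid, NULL, daemon_watcher,
                   (void *)(intptr_t)spawn_daemon());
    pthread_join(tid, NULL);
    return 0;
}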
On Thu, Dec 29, 2011 at 04:35:36PM +0200, Avi Kivity wrote:
On 12/29/2011 04:18 PM, Isaku Yamahata wrote:
The issue is how to solve the page fault, not whether TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE.
I can think of several options.
- When daemon X is dead, all page
On Thu, Dec 29, 2011 at 02:55:42PM +0200, Avi Kivity wrote:
On 12/29/2011 02:39 PM, Isaku Yamahata wrote:
ioctl commands:
UMEM_DEV_CRATE_UMEM: create umem device for qemu
UMEM_DEV_LIST: list created umem devices
UMEM_DEV_REATTACH: re-attach the created umem device
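[A minimal sketch of how qemu might drive these ioctls. Only the ioctl names come from the posting; the request code, argument struct, device path, and return convention below are assumptions made purely for illustration.]

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical argument block; the real RFC defines its own structures. */
struct umem_create {
    uint64_t size;          /* size of the guest RAM region to back        */
    char     name[64];      /* identity later usable by LIST / REATTACH    */
};

/* Placeholder request code; name as quoted in the posting. */
#define UMEM_DEV_CRATE_UMEM  _IOWR('u', 0, struct umem_create)

int main(void)
{
    int ctl = open("/dev/umem", O_RDWR);        /* control device (path assumed) */
    if (ctl < 0) { perror("open /dev/umem"); return 1; }

    struct umem_create req;
    memset(&req, 0, sizeof(req));
    req.size = 1ULL << 30;                       /* 1 GiB of post-copy backed RAM */
    strncpy(req.name, "guest-0-ram", sizeof(req.name) - 1);

    /* Create a umem instance; qemu would mmap() the returned fd as guest RAM,
     * while a daemon (or thread) services the faults and fills in pages
     * pulled from the migration source. */
    int umem_fd = ioctl(ctl, UMEM_DEV_CRATE_UMEM, &req);
    if (umem_fd < 0) { perror("UMEM_DEV_CRATE_UMEM"); return 1; }

    void *ram = mmap(NULL, req.size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, umem_fd, 0);
    if (ram == MAP_FAILED) { perror("mmap"); return 1; }

    /* LIST and REATTACH only matter if the fault-serving process goes away
     * and a new one needs to find and re-attach the existing instance. */
    return 0;
}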
On 12/29/2011 04:18 PM, Isaku Yamahata wrote:
The issue is how to solve the page fault, not whether TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE.
I can think of several options.
- When daemon X is dead, all page faults are served by zero pages.
- When daemon X is dead, all page
On 12/29/2011 03:26 AM, Isaku Yamahata wrote:
This is a Linux kernel driver for qemu/kvm postcopy live migration.
It is used by the qemu/kvm postcopy live migration patch.
TODO:
- Consider FUSE/CUSE option
So far several mmap patches for FUSE/CUSE are floating around. (Their
purpose isn't
On Thu, Dec 29, 2011 at 10:26:16AM +0900, Isaku Yamahata wrote:
UMEM_DEV_LIST: list created umem devices
UMEM_DEV_REATTACH: re-attach the created umem device
UMEM_DEV_LIST and UMEM_DEV_REATTACH are used when
the process that services page faults disappears or
This is a Linux kernel driver for qemu/kvm postcopy live migration.
It is used by the qemu/kvm postcopy live migration patch.
TODO:
- Consider FUSE/CUSE option
So far several mmap patches for FUSE/CUSE are floating around. (Their
purpose isn't different from ours, though.) They haven't