Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration

2012-03-12 Thread Isaku Yamahata
I fixed several issues locally, and am planning to post v2 of the patches
soon.

On Mon, Mar 12, 2012 at 04:36:53PM +0800, thfbjyddx wrote:
> Hi
> Thank you for your reply!
> I've tried with -machine accel=tcg and I got the output below:
>  
> src node:
> [inline screenshot omitted from the archive]
>  
> des node:
> [inline screenshot omitted from the archive]
> and sometimes the last line got
> [inline screenshot omitted from the archive]
>  
> I'm wondering why umemd.mig_read_fd can become readable again without the
> destination node's page_req.
> From the last line on the src node, we can see it doesn't get a page req, but
> the destination node reads umemd.mig_read_fd again.
> I think umemd.mig_read_fd should only become readable when the src node sends
> something to the socket. Is there any other situation?

This will be addressed by v2 patches. The reading part is made fully
non-blocking.

But I think the select() multiplexing + non-blocking IO should eventually be
replaced with a thread + blocking IO.
Given that smarter page compression (e.g. XBRLE) is coming, it won't be
practical to keep those code paths non-blocking.
Threading will cope with those patches and can take advantage of the new
compression.
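As a rough illustration of that direction (a minimal sketch in C, not the v2
code; the function name and page-size handling are made up), a dedicated reader
thread doing blocking reads keeps the receive path simple even when
decompression is expensive:

    #include <unistd.h>

    /* Sketch only: one thread blocks on the migration socket (e.g.
     * umemd.mig_read_fd) and hands complete pages to the umem daemon, so a
     * slow page decompressor never has to be written in a non-blocking,
     * resumable style. Started with pthread_create(..., mig_read_thread, &fd). */
    static void *mig_read_thread(void *opaque)
    {
        int fd = *(int *)opaque;
        char page[4096];                 /* assumes a 4K target page size */

        for (;;) {
            size_t done = 0;
            while (done < sizeof(page)) {
                ssize_t r = read(fd, page + done, sizeof(page) - done); /* blocks */
                if (r <= 0) {
                    return NULL;         /* EOF or error: stop the thread */
                }
                done += (size_t)r;
            }
            /* decompress/copy the received page into guest RAM (omitted) */
        }
    }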


> BTW, can you tell me how the KVM pv clock makes the patches work
> incorrectly?

PV clock touches guest pages during device loading, before umem is enabled.
I also found that many other devices touch guest pages: virtio-balloon,
audio devices...
I addressed it locally by enabling umem before device loading, and the fix
will be included in v2.
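A minimal sketch of that ordering change (the postcopy_incoming_enable_umem()
helper is hypothetical, and the snippet assumes QEMU's internal headers;
qemu_loadvm_state() is QEMU's real entry point for loading device state):

    /* Sketch of the fix described above: arm umem before the device load
     * callbacks run, so any device that touches guest RAM during load
     * (kvmclock, virtio-balloon, audio, ...) already faults through umem. */
    static int postcopy_incoming_load(QEMUFile *f)
    {
        postcopy_incoming_enable_umem();   /* hypothetical helper: umem first */
        return qemu_loadvm_state(f);       /* then restore device state */
    }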

thanks,

> Thanks
>  
> ━━━
> Tommy
>  
> From: Isaku Yamahata
> Date: 2012-01-16 18:17
> To: thfbjyddx
> CC: t.hirofuchi; qemu-devel; kvm; satoshi.itoh
> Subject: Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration
> On Mon, Jan 16, 2012 at 03:51:16PM +0900, Isaku Yamahata wrote:
> > Thank you for your info.
> > I suppose I found the cause, MSR_KVM_WALL_CLOCK and MSR_KVM_SYSTEM_TIME.
> > Your kernel enables KVM paravirt_ops, right?
> > 
> > Although I'm preparing the next patch series including the fixes,
> > you can also try postcopy by disabling paravirt_ops or disabling kvm
> > (use tcg, i.e. -machine accel=tcg).
>  
> Disabling the KVM pv clock would be OK.
> Passing no-kvmclock to the guest kernel disables it.
>  
> > thanks,
> > 
> > 
> > On Thu, Jan 12, 2012 at 09:26:03PM +0800, thfbjyddx wrote:
> > >  
> > > Do you know what wchan the process was blocked at?
> > > kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) doesn't seem to block.
> > >  
> > > It's
> > > WCHAN  COMMAND
> > > umem_fault------qemu-system-x86
> > >  
> > >  
> > > ━━━
> > > Tommy
> > >  
> > > From: Isaku Yamahata
> > > Date: 2012-01-12 16:54
> > > To: thfbjyddx
> > > CC: t.hirofuchi; qemu-devel; kvm; satoshi.itoh
> > > Subject: Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration
> > > On Thu, Jan 12, 2012 at 04:29:44PM +0800, thfbjyddx wrote:
> > > > Hi, I've dug more these days
> > > >  
> > > > > (qemu) migration-tcp: Attempting to start an incoming migration
> > > > > migration-tcp: accepted migration
> > > > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> > > > > 4872:4872 postcopy_incoming_ram_load:1057: done
> > > > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > > > > 4872:4872 postcopy_incoming_ram_load:1037: EOS
> > > > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > > > > 4872:4872 postcopy_incoming_ram_load:1037: EOS
> > > >  
> > > > There should be only a single EOS line. Just a copy & paste mistake?
> > > >  
> > > >
> > > > There must be two EOS, because one comes from postcopy_outgoing_ram_save_live
> > > > (...stage == QEMU_SAVE_LIVE_STAGE_PART) and the other from
> > > > postcopy_outgoing_ram_save_live(...stage == QEMU_SAVE_LIVE_STAGE_END).
> > > > I think in postcopy the ram_save_live in the iterate part can be ignored,
> > > > so why are the qemu_put_byte(f, QEMU_VM_SECTION_PART) and
> > > > qemu_put_byte(f, QEMU_VM_SECTION_END) calls still in the procedure? Is it essential?
> > >  
> > > Not so essential.
> > >  
> > > > Can you please track it down one more

Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration

2012-01-16 Thread Isaku Yamahata
On Mon, Jan 16, 2012 at 03:51:16PM +0900, Isaku Yamahata wrote:
> Thank you for your info.
> I suppose I found the cause, MSR_KVM_WALL_CLOCK and MSR_KVM_SYSTEM_TIME.
> Your kernel enables KVM paravirt_ops, right?
> 
> Although I'm preparing the next patch series including the fixes,
> you can also try postcopy by disabling paravirt_ops or disabling kvm
> (use tcg, i.e. -machine accel=tcg).

Disabling the KVM pv clock would be OK.
Passing no-kvmclock to the guest kernel disables it.
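For example, a guest GRUB kernel line with kvmclock disabled might look like
this (the kernel image path and root device are placeholders, not from the
thread):

    kernel /boot/vmlinuz-3.1.7 root=/dev/sda1 ro no-kvmclock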

> thanks,
> 
> 
> On Thu, Jan 12, 2012 at 09:26:03PM +0800, thfbjyddx wrote:
> >  
> > Do you know what wchan the process was blocked at?
> > kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) doesn't seem to block.
> >  
> > It's
> > WCHAN  COMMAND
> > umem_fault--qemu-system-x86
> >  
> >  
> > ━━━
> > Tommy
> >  
> > From: Isaku Yamahata
> > Date: 2012-01-12 16:54
> > To: thfbjyddx
> > CC: t.hirofuchi; qemu-devel; kvm; satoshi.itoh
> > Subject: Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration
> > On Thu, Jan 12, 2012 at 04:29:44PM +0800, thfbjyddx wrote:
> > > Hi, I've dug more these days
> > >  
> > > > (qemu) migration-tcp: Attempting to start an incoming migration
> > > > migration-tcp: accepted migration
> > > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> > > > 4872:4872 postcopy_incoming_ram_load:1057: done
> > > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > > > 4872:4872 postcopy_incoming_ram_load:1037: EOS
> > > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > > > 4872:4872 postcopy_incoming_ram_load:1037: EOS
> > >  
> > > There should be only a single EOS line. Just a copy & paste mistake?
> > >  
> > > There must be two EOS, because one comes from
> > > postcopy_outgoing_ram_save_live(...stage == QEMU_SAVE_LIVE_STAGE_PART)
> > > and the other from
> > > postcopy_outgoing_ram_save_live(...stage == QEMU_SAVE_LIVE_STAGE_END).
> > > I think in postcopy the ram_save_live in the iterate part can be ignored,
> > > so why are the qemu_put_byte(f, QEMU_VM_SECTION_PART) and
> > > qemu_put_byte(f, QEMU_VM_SECTION_END) calls still in the procedure? Is it essential?
> >  
> > Not so essential.
> >  
> > > Can you please track it down one more step?
> > > Which line did it get stuck at in kvm_put_msrs()? kvm_put_msrs() doesn't seem
> > > to block. (A backtrace from the debugger would be best.)
> > >
> > > it gets to the kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) call and never
> > > returns,
> > > so it gets stuck
> >  
> > Do you know what wchan the process was blocked at?
> > kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) doesn't seem to block.
> >  
> >  
> > > When I checked the EOS problem,
> > > I just commented out the qemu_put_byte(f, QEMU_VM_SECTION_PART) and
> > > qemu_put_be32(f, se->section_id) calls
> > > (I think this is a wrong way to fix it and I don't know how it gets
> > > through)
> > > and left just the se->save_live_state call in qemu_savevm_state_iterate.
> > > It didn't get stuck at kvm_put_msrs(),
> > > but it has some other error:
> > > (qemu) migration-tcp: Attempting to start an incoming migration
> > > migration-tcp: accepted migration
> > > 2126:2126 postcopy_incoming_ram_load:1018: incoming ram load
> > > 2126:2126 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> > > 2126:2126 postcopy_incoming_ram_load:1057: done
> > > migration: successfully loaded vm state
> > > 2126:2126 postcopy_incoming_fork_umemd:1069: fork
> > > 2126:2126 postcopy_incoming_fork_umemd:1127: qemu pid: 2126 daemon pid: 
> > > 2129
> > > 2130:2130 postcopy_incoming_umemd:1840: daemon pid: 2130
> > > 2130:2130 postcopy_incoming_umemd:1875: entering umemd main loop
> > > Can't find block !
> > > 2130:2130 postcopy_incoming_umem_ram_load:1526: shmem == NULL
> > > 2130:2130 postcopy_incoming_umemd:1882: exiting umemd main loop
> > > and at the same time, the destination node didn't show the EOS
> > >  
> > > so I still can't solve the stuck problem

Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration

2012-01-15 Thread Isaku Yamahata
Thank you for your info.
I suppose I found the cause, MSR_KVM_WALL_CLOCK and MSR_KVM_SYSTEM_TIME.
Your kernel enables KVM paravirt_ops, right?

Although I'm preparing the next patch series including the fixes,
you can also try postcopy by disabling paravirt_ops or disabling kvm
(use tcg, i.e. -machine accel=tcg).

thanks,


On Thu, Jan 12, 2012 at 09:26:03PM +0800, thfbjyddx wrote:
>  
> Do you know what wchan the process was blocked at?
> kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) doesn't seem to block.
>  
> It's
> WCHAN  COMMAND
> umem_fault--qemu-system-x86
>  
>  
> ━━━
> Tommy
>  
> From: Isaku Yamahata
> Date: 2012-01-12 16:54
> To: thfbjyddx
> CC: t.hirofuchi; qemu-devel; kvm; satoshi.itoh
> Subject: Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration
> On Thu, Jan 12, 2012 at 04:29:44PM +0800, thfbjyddx wrote:
> > Hi, I've dug more these days
> >  
> > > (qemu) migration-tcp: Attempting to start an incoming migration
> > > migration-tcp: accepted migration
> > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> > > 4872:4872 postcopy_incoming_ram_load:1057: done
> > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > > 4872:4872 postcopy_incoming_ram_load:1037: EOS
> > > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > > 4872:4872 postcopy_incoming_ram_load:1037: EOS
> >  
> > There should be only a single EOS line. Just a copy & paste mistake?
> >  
> > There must be two EOS, because one comes from postcopy_outgoing_ram_save_live
> > (...stage == QEMU_SAVE_LIVE_STAGE_PART) and the other from
> > postcopy_outgoing_ram_save_live(...stage == QEMU_SAVE_LIVE_STAGE_END).
> > I think in postcopy the ram_save_live in the iterate part can be ignored,
> > so why are the qemu_put_byte(f, QEMU_VM_SECTION_PART) and
> > qemu_put_byte(f, QEMU_VM_SECTION_END) calls still in the procedure? Is it essential?
>  
> Not so essential.
>  
> > Can you please track it down one more step?
> > Which line did it get stuck at in kvm_put_msrs()? kvm_put_msrs() doesn't seem
> > to block. (A backtrace from the debugger would be best.)
> >
> > it gets to the kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) call and never returns,
> > so it gets stuck
>  
> Do you know what wchan the process was blocked at?
> kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) doesn't seem to block.
>  
>  
> > When I checked the EOS problem,
> > I just commented out the qemu_put_byte(f, QEMU_VM_SECTION_PART) and
> > qemu_put_be32(f, se->section_id) calls
> > (I think this is a wrong way to fix it and I don't know how it gets through)
> > and left just the se->save_live_state call in qemu_savevm_state_iterate.
> > It didn't get stuck at kvm_put_msrs(),
> > but it has some other error:
> > (qemu) migration-tcp: Attempting to start an incoming migration
> > migration-tcp: accepted migration
> > 2126:2126 postcopy_incoming_ram_load:1018: incoming ram load
> > 2126:2126 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> > 2126:2126 postcopy_incoming_ram_load:1057: done
> > migration: successfully loaded vm state
> > 2126:2126 postcopy_incoming_fork_umemd:1069: fork
> > 2126:2126 postcopy_incoming_fork_umemd:1127: qemu pid: 2126 daemon pid: 2129
> > 2130:2130 postcopy_incoming_umemd:1840: daemon pid: 2130
> > 2130:2130 postcopy_incoming_umemd:1875: entering umemd main loop
> > Can't find block !
> > 2130:2130 postcopy_incoming_umem_ram_load:1526: shmem == NULL
> > 2130:2130 postcopy_incoming_umemd:1882: exiting umemd main loop
> > and at the same time, the destination node didn't show the EOS
> >  
> > so I still can't solve the stuck problem
> > Thanks for your help~!
> > ━━━
> > Tommy
> >  
> > From: Isaku Yamahata
> > Date: 2012-01-11 10:45
> > To: thfbjyddx
> > CC: t.hirofuchi; qemu-devel; kvm; satoshi.itoh
> > Subject: Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration
> > On Sat, Jan 07, 2012 at 06:29:14PM +0800, thfbjyddx wrote:
> > > Hello all!
> >  
> > Hi, thank you for the detailed report. The procedure you've tried looks
> > basically good. Some comments below.
> >  
> > > I got the qemu basic version(03ecd2c80a64d030a22

Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration

2012-01-12 Thread Isaku Yamahata
On Thu, Jan 12, 2012 at 04:29:44PM +0800, thfbjyddx wrote:
> Hi, I've dug more these days
>  
> > (qemu) migration-tcp: Attempting to start an incoming migration
> > migration-tcp: accepted migration
> > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> > 4872:4872 postcopy_incoming_ram_load:1057: done
> > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > 4872:4872 postcopy_incoming_ram_load:1037: EOS
> > 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> > 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> > 4872:4872 postcopy_incoming_ram_load:1037: EOS
>  
> There should be only a single EOS line. Just a copy & paste mistake?
>  
> There must be two EOS, because one comes from postcopy_outgoing_ram_save_live
> (...stage == QEMU_SAVE_LIVE_STAGE_PART) and the other from
> postcopy_outgoing_ram_save_live(...stage == QEMU_SAVE_LIVE_STAGE_END).
> I think in postcopy the ram_save_live in the iterate part can be ignored,
> so why are the qemu_put_byte(f, QEMU_VM_SECTION_PART) and
> qemu_put_byte(f, QEMU_VM_SECTION_END) calls still in the procedure? Is it essential?

Not so essential.
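For reference, a rough sketch of where the two EOS marks come from (the names
are taken from the quoted text and the incoming-side logs, where flags 0x10 is
reported as EOS; the real patch certainly differs in detail):

    /* Sketch only: both the PART (iterate) and END (complete) calls into the
     * ram save_live handler terminate their output with an EOS mark, which is
     * why the incoming side logs "flags 0x10 ... EOS" twice. */
    static int postcopy_outgoing_ram_save_live(QEMUFile *f, int stage, void *opaque)
    {
        if (stage == QEMU_SAVE_LIVE_STAGE_PART) {
            /* iterate stage: postcopy sends no pages here */
        } else if (stage == QEMU_SAVE_LIVE_STAGE_END) {
            /* final stage before switching to postcopy mode */
        }
        qemu_put_be64(f, RAM_SAVE_FLAG_EOS);   /* one EOS per call, hence two */
        return 1;
    }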

> Can you please track it down one more step?
> Which line did it get stuck at in kvm_put_msrs()? kvm_put_msrs() doesn't seem
> to block. (A backtrace from the debugger would be best.)
>
> it gets to the kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) call and never returns,
> so it gets stuck

Do you know what wchan the process was blocked at?
kvm_vcpu_ioctl(env, KVM_SET_MSRS, &msr_data) doesn't seem to block.
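One way to check that from the host (the process lookup is just an example):

    $ cat /proc/$(pidof qemu-system-x86_64)/wchan
    $ ps -o pid,wchan:25,comm -C qemu-system-x86_64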


> When I checked the EOS problem,
> I just commented out the qemu_put_byte(f, QEMU_VM_SECTION_PART) and
> qemu_put_be32(f, se->section_id) calls
> (I think this is a wrong way to fix it and I don't know how it gets through)
> and left just the se->save_live_state call in qemu_savevm_state_iterate.
> It didn't get stuck at kvm_put_msrs(),
> but it has some other error:
> (qemu) migration-tcp: Attempting to start an incoming migration
> migration-tcp: accepted migration
> 2126:2126 postcopy_incoming_ram_load:1018: incoming ram load
> 2126:2126 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> 2126:2126 postcopy_incoming_ram_load:1057: done
> migration: successfully loaded vm state
> 2126:2126 postcopy_incoming_fork_umemd:1069: fork
> 2126:2126 postcopy_incoming_fork_umemd:1127: qemu pid: 2126 daemon pid: 2129
> 2130:2130 postcopy_incoming_umemd:1840: daemon pid: 2130
> 2130:2130 postcopy_incoming_umemd:1875: entering umemd main loop
> Can't find block !
> 2130:2130 postcopy_incoming_umem_ram_load:1526: shmem == NULL
> 2130:2130 postcopy_incoming_umemd:1882: exiting umemd main loop
> and at the same time, the destination node didn't show the EOS
>  
> so I still can't solve the stuck problem
> Thanks for your help~!
> ━━━
> Tommy
>  
> From: Isaku Yamahata
> Date: 2012-01-11 10:45
> To: thfbjyddx
> CC: t.hirofuchi; qemu-devel; kvm; satoshi.itoh
> Subject: Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration
> On Sat, Jan 07, 2012 at 06:29:14PM +0800, thfbjyddx wrote:
> > Hello all!
>  
> Hi, thank you for the detailed report. The procedure you've tried looks
> basically good. Some comments below.
>  
> > I got the qemu base version (03ecd2c80a64d030a22fe67cc7a60f24e17ff211) and
> > patched it correctly,
> > but it still didn't work and I got the same scenario as before
> > outgoing node intel x86_64; incoming node amd x86_64. guest image is on nfs
> >  
> > I think I should show what I did more clearly, and hope somebody can figure
> > out the problem.
> > 
> >  ・ 1, both in/out nodes patch the qemu and start on a 3.1.7 kernel with umem
> > 
> >./configure --target-list=x86_64-softmmu --enable-kvm --enable-postcopy
> > --enable-debug
> >make
> >make install
> > 
> >  ・ 2, outgoing qemu:
> > 
> > qemu-system-x86_64 -m 256 -hda xxx -monitor stdio -vnc :2 -usbdevice tablet
> > -machine accel=kvm
> > incoming qemu:
> > qemu-system-x86_64 -m 256 -hda xxx -postcopy -incoming tcp:0: -monitor
> > stdio -vnc :2 -usbdevice tablet -machine accel=kvm
> > 
> >  ・ 3, outgoing node:
> > 
> > migrate -d -p -n tcp:(incoming node ip):
> >  
> > result:
> > 
> >  ・ outgoing qemu:
> > 
> > info status: VM-status: paused (finish-migrate);
> > 
> >  ・ incoming qemu:
> > 
> > can't type a

Re: [Qemu-devel] 回复: [PATCH 00/21][RFC] postcopy live migration

2012-01-11 Thread Isaku Yamahata
On Sat, Jan 07, 2012 at 06:29:14PM +0800, thfbjyddx wrote:
> Hello all!

Hi, thank you for the detailed report. The procedure you've tried looks
basically good. Some comments below.


> I got the qemu base version (03ecd2c80a64d030a22fe67cc7a60f24e17ff211) and
> patched it correctly,
> but it still didn't work and I got the same scenario as before
> outgoing node intel x86_64; incoming node amd x86_64. guest image is on nfs
>  
> I think I should show what I did more clearly, and hope somebody can figure out
> the problem.
> 
>  ・ 1, both in/out nodes patch the qemu and start on a 3.1.7 kernel with umem
> 
>./configure --target-list=x86_64-softmmu --enable-kvm --enable-postcopy
> --enable-debug
>make
>make install
> 
>  ・ 2, outgoing qemu:
> 
> qemu-system-x86_64 -m 256 -hda xxx -monitor stdio -vnc :2 -usbdevice tablet
> -machine accel=kvm
> incoming qemu:
> qemu-system-x86_64 -m 256 -hda xxx -postcopy -incoming tcp:0: -monitor
> stdio -vnc :2 -usbdevice tablet -machine accel=kvm
> 
>  ・ 3, outgoing node:
> 
> migrate -d -p -n tcp:(incoming node ip):
>  
> result:
> 
>  ・ outgoing qemu:
> 
> info status: VM-status: paused (finish-migrate);
> 
>  ・ incoming qemu:
> 
> can't type any more and can't kill the process (qemu-system-x86)
>  
> I enabled the debug flag in migration.c, migration-tcp.c and migration-postcopy.c:
> 
>  ・ outgoing qemu:
> 
> (qemu) migration-tcp: connect completed
> migration: beginning savevm
> 4500:4500 postcopy_outgoing_ram_save_live:540: stage 1
> migration: iterate
> 4500:4500 postcopy_outgoing_ram_save_live:540: stage 2
> migration: done iterating
> 4500:4500 postcopy_outgoing_ram_save_live:540: stage 3
> 4500:4500 postcopy_outgoing_begin:716: outgoing begin
> 
>  ・ incoming qemu:
> 
> (qemu) migration-tcp: Attempting to start an incoming migration
> migration-tcp: accepted migration
> 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> 4872:4872 postcopy_incoming_ram_load:1031: addr 0x1087 flags 0x4
> 4872:4872 postcopy_incoming_ram_load:1057: done
> 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> 4872:4872 postcopy_incoming_ram_load:1037: EOS
> 4872:4872 postcopy_incoming_ram_load:1018: incoming ram load
> 4872:4872 postcopy_incoming_ram_load:1031: addr 0x0 flags 0x10
> 4872:4872 postcopy_incoming_ram_load:1037: EOS

There should be only a single EOS line. Just a copy & paste mistake?


> From the result:
> it didn't get to the "successfully loaded vm state" line,
> so it is still in qemu_loadvm_state, and I found it's in
> cpu_synchronize_all_post_init->kvm_arch_put_registers->kvm_put_msrs and got
> stuck

Can you please track it down one more step?
Which line did it get stuck at in kvm_put_msrs()? kvm_put_msrs() doesn't seem
to block. (A backtrace from the debugger would be best.)
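For example, one way to get that backtrace (the pid lookup is just an example):

    $ gdb -p $(pidof qemu-system-x86_64)
    (gdb) thread apply all bt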

If possible, can you please test with a more simplified configuration,
i.e. drop devices as much as possible: no usbdevice, no disk...
That will simplify the debugging.
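For instance, a stripped-down incoming command along those lines might look
like this (the port is a placeholder; the flags follow the examples elsewhere
in this thread):

    qemu-system-x86_64 -m 256 -postcopy -incoming tcp:0:<port> -monitor stdio -machine accel=kvm -nographic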

thanks,

> Can anyone give some advice on the problem?
> Thanks very much~
>  
> ━━━
> Tommy
>  
> From: Isaku Yamahata
> Date: 2011-12-29 09:25
> To: kvm; qemu-devel
> CC: yamahata; t.hirofuchi; satoshi.itoh
> Subject: [Qemu-devel] [PATCH 00/21][RFC] postcopy live migration
> Intro
> =
> This patch series implements postcopy live migration.[1]
> As discussed at KVM Forum 2011, a dedicated character device is used for
> distributed shared memory between the migration source and destination.
> Now we can discuss/benchmark/compare with precopy. I believe there is
> much room for improvement.
>  
> [1] http://wiki.qemu.org/Features/PostCopyLiveMigration
>  
>  
> Usage
> =
> You need to load the umem character device on the host before starting migration.
> Postcopy can be used with the tcg and kvm accelerators. The implementation depends
> only on the Linux umem character device, but the driver-dependent code is split
> into its own file.
> I tested only the host page size == guest page size case, but the implementation
> allows the host page size != guest page size case.
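As an illustration only (the module file name and /dev node are assumptions,
not taken from the patch description), loading the device might look like:

    # on both hosts, before starting qemu
    sudo insmod ./umem.ko
    ls -l /dev/umem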
>  
> The following options are added with this patch series.
> - incoming part
>   command line options
>   -postcopy [-postcopy-flags ]
>   where flags is for changing behavior for benchmark/debugging
>   Currently the following flags are available
>   0: default
>   1: enable touching page request
>  
>   example:
>   qemu -postcopy -incoming tcp:0: -monitor stdio -machine accel=kvm
>  
> - outgoing part
>   options for the migrate command
>   migrate [-p [-n]] URI
>   -p: indicate postcopy migration
>   -n: disable background transfer of pages: this is for benchmark/debugging
>  
>   example:
>   migrate -p -n tcp::
>  
>  
> TODO
> 
> - benchmark/evaluation. Especially how async page fault affects the result.
> - improvement/optimization
>   At the moment, at least what I'm aware of is:
>   - touching pages in the incoming qemu process by the fd handler seems suboptimal