Hi,

For vhost, *last_avail_idx* is maintained in vhost_virtqueue, but during live migration *last_avail_idx* is fetched from VirtQueue. Do you know how these two copies of *last_avail_idx* are synchronized?
The virtio_load-related code that is called during live migration:

    vdev->vq[i].inuse = (uint16_t)(vdev->vq[i].last_avail_idx -
                                   vdev->vq[i].used_idx);
    if (vdev->vq[i].inuse > vdev->vq[i].vring.num) {
        error_report("VQ %d size 0x%x < last_avail_idx 0x%x - "
                     "used_idx 0x%x",
                     i, vdev->vq[i].vring.num,
                     vdev->vq[i].last_avail_idx,
                     vdev->vq[i].used_idx);
        return -1;
    }

Thanks

On Tue, 23 Apr 2019 at 14:20, fengyd <fengy...@gmail.com> wrote:

> Hi,
>
> I want to add some logging to qemu-kvm-ev.
> Do you know how to compile qemu-kvm-ev from source code?
>
> Thanks
>
> Yafeng
>
> On Tue, 16 Apr 2019 at 16:47, Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
>
>> * fengyd (fengy...@gmail.com) wrote:
>> > ---------- Forwarded message ---------
>> > From: fengyd <fengy...@gmail.com>
>> > Date: Tue, 16 Apr 2019 at 09:17
>> > Subject: Re: [Qemu-devel] How live migration work for vhost-user
>> > To: Dr. David Alan Gilbert <dgilb...@redhat.com>
>> >
>> > Hi,
>> >
>> > Does any special feature need to be supported in the guest driver?
>> > It works for a standard Linux VM, but not for our VM, where virtio
>> > is implemented by ourselves.
>>
>> I'm not sure; you do have to support that 'log' mechanism, but I don't
>> know what else is needed.
>>
>> > And with qemu-kvm-ev-2.6, live migration does work with our VM where
>> > virtio is implemented by ourselves.
>>
>> 2.6 is pretty old, so there are a lot of changes - not sure what's
>> relevant.
>>
>> Dave
>>
>> > Thanks
>> > Yafeng
>> >
>> > On Mon, 15 Apr 2019 at 22:54, Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
>> >
>> > > * fengyd (fengy...@gmail.com) wrote:
>> > > > Hi,
>> > > >
>> > > > During live migration, the following log can be seen in
>> > > > nova-compute.log in my environment:
>> > > >
>> > > > ERROR nova.virt.libvirt.driver [req-039a85e1-e7a1-4a63-bc6d-c4b9a044aab6
>> > > > 0cdab20dc79f4bc6ae5790e7b4a898ac 3363c319773549178acc67f32c78310e - default
>> > > > default] [instance: 5ec719f4-1865-4afe-a207-3d9fae22c410] Live Migration
>> > > > failure: internal error: qemu unexpectedly closed the monitor:
>> > > > 2019-04-15T02:58:22.213897Z qemu-kvm: VQ 0
>> > > > size 0x100 < last_avail_idx 0x1e - used_idx 0x23
>> > > >
>> > > > It's OK for a standard Linux VM, but not for our VM, where virtio is
>> > > > implemented by ourselves.
>> > > > KVM versions as follows:
>> > > > qemu-kvm-common-ev-2.12.0-18.el7_6.3.1.x86_64
>> > > > qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
>> > > > libvirt-daemon-kvm-3.9.0-14.2.el7.centos.ncir.8.x86_64
>> > > >
>> > > > Do you know what the difference is between virtio and vhost-user
>> > > > during migration?
>> > > > The function virtio_load in QEMU is called for both virtio and
>> > > > vhost-user during migration.
>> > > > For virtio, last_avail_idx and used_idx are stored in QEMU, and QEMU
>> > > > is responsible for updating their values accordingly.
>> > > > For vhost-user, last_avail_idx and used_idx are stored in the
>> > > > vhost-user app, e.g. DPDK, not in QEMU?
>> > > > How does migration work for vhost-user?
>> > >
>> > > I don't know the details, but my understanding is that vhost-user
>> > > tells the vhost-user client about an area of 'log' memory, where the
>> > > vhost-user client must mark pages as dirty.
>> > >
>> > > In the qemu source, see docs/interop/vhost-user.txt and see
>> > > the VHOST_SET_LOG_BASE and VHOST_USER_SET_LOG_FD calls.
>> > >
>> > > If the client correctly marks the areas as dirty, then qemu
>> > > should resend those pages across.
>> > >
>> > > Dave
>> > >
>> > > > Thanks in advance
>> > > > Yafeng
>> > >
>> > > --
>> > > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
>>
>> --
>> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK