On Fri, Jan 30, 2015 at 01:53:47PM +0530, Bharata B Rao wrote:
> On Thu, Jan 29, 2015 at 12:48:39PM +1100, David Gibson wrote:
> > On Thu, Jan 08, 2015 at 11:40:17AM +0530, Bharata B Rao wrote:
> > > From: Gu Zheng <guz.f...@cn.fujitsu.com>
> > 
> > This needs a commit message, it's not at all clear from the 1-line
> > description.
> 
> Borrowed patch, but I should have put in a description.
> 
> > > diff --git a/kvm-all.c b/kvm-all.c
> > > index 18cc6b4..6f543ce 100644
> > > --- a/kvm-all.c
> > > +++ b/kvm-all.c
> > > @@ -71,6 +71,12 @@ typedef struct KVMSlot
> > >  
> > >  typedef struct kvm_dirty_log KVMDirtyLog;
> > >  
> > > +struct KVMParkedVcpu {
> > > +    unsigned long vcpu_id;
> > > +    int kvm_fd;
> > > +    QLIST_ENTRY(KVMParkedVcpu) node;
> > > +};
> > > +
> > >  struct KVMState
> > >  {
> > >      AccelState parent_obj;
> > > @@ -107,6 +113,7 @@ struct KVMState
> > >      QTAILQ_HEAD(msi_hashtab, KVMMSIRoute) msi_hashtab[KVM_MSI_HASHTAB_SIZE];
> > >      bool direct_msi;
> > >  #endif
> > > +    QLIST_HEAD(, KVMParkedVcpu) kvm_parked_vcpus;
> > >  };
> > >  
> > >  #define TYPE_KVM_ACCEL ACCEL_CLASS_NAME("kvm")
> > > @@ -247,6 +254,53 @@ static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
> > >      return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
> > >  }
> > >  
> > > +int kvm_destroy_vcpu(CPUState *cpu)
> > > +{
> > > +    KVMState *s = kvm_state;
> > > +    long mmap_size;
> > > +    struct KVMParkedVcpu *vcpu = NULL;
> > > +    int ret = 0;
> > > +
> > > +    DPRINTF("kvm_destroy_vcpu\n");
> > > +
> > > +    mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
> > > +    if (mmap_size < 0) {
> > > +        ret = mmap_size;
> > > +        DPRINTF("kvm_destroy_vcpu failed\n");
> > > +        goto err;
> > > +    }
> > > +
> > > +    ret = munmap(cpu->kvm_run, mmap_size);
> > > +    if (ret < 0) {
> > > +        goto err;
> > > +    }
> > > +
> > > +    vcpu = g_malloc0(sizeof(*vcpu));
> > > +    vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
> > > +    vcpu->kvm_fd = cpu->kvm_fd;
> > > +    QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
> > 
> > What's the reason for parking vcpus rather than removing / recreating
> > them at the kvm level?
> 
> Since KVM isn't equipped to handle closure of a vcpu fd from userspace
> (QEMU) correctly, certain workarounds have to be employed to allow reuse
> of a vcpu array slot in KVM during cpu hot plug/unplug from the guest.
> One such proposed workaround is to park the vcpu fd in userspace during
> cpu unplug and reuse it later during the next hotplug.
> 
> Some details can be found here:
> KVM: https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html
> QEMU: http://lists.gnu.org/archive/html/qemu-devel/2014-12/msg00859.html
Ok, that makes sense but it definitely needs comment, both in the code
and in the commit message.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson