On Thu, Aug 18, 2016 at 1:58 PM, Nir Levy <n...@asocsnetworks.com> wrote:
> I have made progress in tracing QEMU: I added the thread and a "done" tag
> for each kvm_ioctl, kvm_vm_ioctl, and kvm_vcpu_ioctl in order to
> investigate pure hypervisor activity and delays on the host. The KVM
> ioctl type is printed only for convenience.
>
> For example:
>
> kvm_ioctl 3106435.230545 pid=11347 thread=11347 type=0xae03 arg=0x25
> kvm_ioctl_done 3106435.230546 pid=11347 thread=11347 type=0xae03 arg=0x25 diff=1 (KVM_CHECK_EXTENSION)
>
> kvm_vcpu_ioctl 3106435.253930 pid=11347 thread=11354 cpu_index=0x2 type=0x4008ae9c arg=0x56417e6cb4f0
> kvm_vcpu_ioctl_done 3106435.253931 pid=11347 thread=11354 cpu_index=0x2 type=0x4008ae9c arg=0x56417e6cb4f0 diff=1 (KVM_X86_SETUP_MCE)
>
> kvm_vm_ioctl 3106435.268896 pid=11347 thread=11347 type=0x4020ae46 arg=0x7ffed97cf9d0
> kvm_vm_ioctl_done 3106435.269082 pid=11347 thread=11347 type=0x4020ae46 arg=0x7ffed97cf9d0 diff=186 (KVM_SET_USER_MEMORY_REGION)
>
> I have noticed that KVM_RUN can take even seconds, but that is probably
> low-priority tasks (I/O workers, probably).
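For reference, the diff values above can be recomputed directly from the
timestamps in the trace lines, which are in seconds. A minimal sketch
(the field layout is assumed from the example lines, with the timestamp
as the second whitespace-separated field):

```python
# Pair of trace lines as in the example above.
lines = [
    "kvm_ioctl 3106435.230545 pid=11347 thread=11347 type=0xae03 arg=0x25",
    "kvm_ioctl_done 3106435.230546 pid=11347 thread=11347 type=0xae03 arg=0x25",
]

def timestamp(line):
    """Extract the floating-point timestamp (second field) of a trace line."""
    return float(line.split()[1])

# Latency in microseconds between the ioctl entry and its *_done event.
diff_us = round((timestamp(lines[1]) - timestamp(lines[0])) * 1e6)
print(diff_us)  # -> 1, matching diff=1 in the trace
```

The same calculation applied to the kvm_vm_ioctl pair reproduces the
186-microsecond figure discussed below.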
Please read Linux Documentation/virtual/kvm/api.txt to learn about the ioctl calls. KVM_RUN is *the* ioctl that executes guest code. Unless a vcpu is halted we should be inside KVM_RUN, so spending time inside this ioctl is normal.

> but this 186-microsecond delay on the main QEMU thread is suspicious and
> might cause delays for applications running inside the VM.

By "186 microseconds" you are referring to KVM_SET_USER_MEMORY_REGION in the trace above. Is this ioctl called in the critical path? I doubt it, since the KVM_X86_SETUP_MCE ioctl in your trace happens during initialization, from kvm_arch_init_vcpu(), and is not in the critical path when the guest is running. Why worry about latencies that do not affect running guests?

Stefan
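P.S. The type= values in the trace are ordinary Linux ioctl request
numbers, so the direction, argument size, and command number can be
decoded without a lookup table. A sketch using the standard _IOC bit
layout from include/uapi/asm-generic/ioctl.h (the command-name mapping
for 0x46, 0x9c, and 0x03 comes from the trace annotations above):

```python
def decode_ioctl(req):
    """Split a Linux ioctl request number into its _IOC fields."""
    nr = req & 0xff              # command number within the magic group
    typ = (req >> 8) & 0xff      # magic byte; 0xae is KVMIO
    size = (req >> 16) & 0x3fff  # size of the argument struct in bytes
    dirn = (req >> 30) & 0x3     # 0=_IO, 1=_IOW, 2=_IOR, 3=_IOWR
    return dirn, size, typ, nr

# 0x4020ae46 from the trace decodes to _IOW(KVMIO, 0x46, 32-byte struct),
# i.e. KVM_SET_USER_MEMORY_REGION taking struct kvm_userspace_memory_region.
print(decode_ioctl(0x4020ae46))  # -> (1, 32, 174, 70)
```

Applied to the other two values, 0xae03 decodes to _IO(KVMIO, 0x03)
(KVM_CHECK_EXTENSION) and 0x4008ae9c to _IOW(KVMIO, 0x9c, __u64)
(KVM_X86_SETUP_MCE), consistent with the trace.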