On Sat, May 27, 2023 at 05:07:34AM +0000, br...@mailbox.org wrote:
>
> On Sat, 27 May 2023, Mike Larkin wrote:
>
> > On Fri, May 26, 2023 at 08:14:23PM +0200, br...@mailbox.org wrote:
> > > On 05/26/2023 8:08 PM CEST Mike Larkin <mlar...@nested.page> wrote:
> > >
> > >
> > > > On Fri, May 26, 2023 at 07:16:09PM +0200, br...@mailbox.org wrote:
> > > > > > On 05/26/2023 6:06 PM CEST Mike Larkin <mlar...@nested.page> wrote:
> > > > > >
> > > > > > perf top on the linux side to see where qemu is spending its time?
> > > > >
> > > > > Sure, I ran `perf top -p $PID` with $PID being the PID of the QEMU 
> > > > > process and copied the screen after a few seconds. Let me know if you 
> > > > > intended something different:
> > > > >
> > > > >  PerfTop:     133 irqs/sec  kernel:72.9%  exact:  0.0% lost: 0/0 drop: 0/0 [4000Hz cycles],  (target_pid: 9939)
> > > > > --------------------------------------------------------------------------------------------------------------------------------------------------
> > > > >
> > > > >     25.35%  [kvm_amd]      [k] svm_vcpu_run
> > > > >      4.18%  [kernel]       [k] native_write_msr
> > > > >      4.01%  [kernel]       [k] native_read_msr
> > > > >      3.73%  [kernel]       [k] read_tsc
> > > > >      3.64%  [kvm]          [k] kvm_arch_vcpu_ioctl_run
> > > > >      2.21%  [kvm_amd]      [k] svm_vcpu_load
> > > > >      1.98%  [kernel]       [k] ktime_get
> > > > >      1.47%  [kvm]          [k] kvm_apic_has_interrupt
> > > > >      1.40%  [kernel]       [k] restore_fpregs_from_fpstate
> > > > >      1.29%  [kvm]          [k] apic_has_interrupt_for_ppr
> > > > >      1.18%  [kernel]       [k] check_preemption_disabled
> > > > >      1.10%  [kernel]       [k] x86_pmu_disable_all
> > > > >      1.07%  [kernel]       [k] __srcu_read_lock
> > > > >      1.07%  [kernel]       [k] newidle_balance
> > > > >      1.03%  [kvm]          [k] kvm_pmu_trigger_event
> > > > >      0.98%  [kernel]       [k] amd_pmu_addr_offset
> > > > >
> > > > > I also tried this on the FreeBSD VM, and there the irqs/sec were
> > > > > between 2 and 4.
> > > > >
> > > >
> > > > you might just be bombarded with ipis. how many vcpus?
> > >
> > > It should be 16, I use `-smp 16 -cpu host`
> > >
> >
> > try with fewer vcpus and see if that helps.
>
> With -smp 4 it's better, although still worse than FreeBSD/Linux. In htop,
> OpenBSD is in the 1.3-2.6% range while the other OSes are at 0-0.7%. As I
> said, the FreeBSD/Linux VMs stay near 0% even at -smp 16, while I've seen
> OpenBSD idle at up to 7-8% with that.

Probably IPI traffic then; not sure what else to say. If a few percent of host
overhead is too much for you with a 16-vCPU VM, I'd suggest reducing the vCPU count.

What is your workload for a 16-vCPU OpenBSD VM anyway?
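For what it's worth, you can get a rough sense of how much of the host-side
sampled time lands in the guest entry/exit paths by totaling the `[kvm]` and
`[kvm_amd]` lines from the perf top output quoted above. A throwaway awk
sketch (`perf.txt` is a hypothetical file holding the pasted perf-top lines,
not something from this thread):

```shell
# Sum the sample percentages that perf top attributed to the kvm and
# kvm_amd modules. Column 1 is the percentage (e.g. "25.35%"), column 2
# the DSO (e.g. "[kvm_amd]"); awk coerces "25.35%" to 25.35 on addition.
awk '$2 ~ /^\[kvm/ { sum += $1 } END { printf "%.2f%%\n", sum }' perf.txt
```

On the output pasted above that comes to about 35%, i.e. roughly a third of
the host-side samples are in the vcpu enter/exit paths, which would fit the
IPI theory.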
