Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-11-08 Thread Eduardo Habkost
On Tue, Nov 07, 2017 at 12:21:16PM +0100, Paolo Bonzini wrote:
> On 14/10/2017 01:56, Eduardo Habkost wrote:
> > Now, I don't know yet what's the best default for a guest that
> > has CONFIG_PARAVIRT_SPINLOCKS when it sees a host that supports
> > kvm_pv_unhalt.  But I'm arguing that it's the guest's
> > responsibility to choose what to do when it detects such a host,
> > instead of expecting the host to hide features from the guest.
> > The guest and the guest administrator have more information to
> > choose what's best.
> > 
> > In other words, if exposing kvm_pv_unhalt on CPUID really makes
> > some guests behave poorly, can we fix the guests instead?
> 
> Waiman just did it. :)
> 
> https://marc.info/?l=linux-kernel&m=150972337909996

Thanks for the pointer.  I will resubmit the series for 2.12
after 2.11 is released.

-- 
Eduardo



Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-11-07 Thread Paolo Bonzini
On 14/10/2017 01:56, Eduardo Habkost wrote:
> Now, I don't know yet what's the best default for a guest that
> has CONFIG_PARAVIRT_SPINLOCKS when it sees a host that supports
> kvm_pv_unhalt.  But I'm arguing that it's the guest's
> responsibility to choose what to do when it detects such a host,
> instead of expecting the host to hide features from the guest.
> The guest and the guest administrator have more information to
> choose what's best.
> 
> In other words, if exposing kvm_pv_unhalt on CPUID really makes
> some guests behave poorly, can we fix the guests instead?

Waiman just did it. :)

https://marc.info/?l=linux-kernel&m=150972337909996

Paolo



Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-10-13 Thread Eduardo Habkost
On Fri, Oct 13, 2017 at 04:58:23PM -0400, Waiman Long wrote:
> On 10/13/2017 03:01 PM, Eduardo Habkost wrote:
> > On Wed, Oct 11, 2017 at 04:19:38PM -0400, Waiman Long wrote:
> >> On 10/10/2017 03:41 PM, Eduardo Habkost wrote:
> >>> On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
> >>>> On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
> >>>>>> Yes.  Another possibility is to enable it when there is >1 NUMA node in
> >>>>>> the guest.  We generally don't do this kind of magic but higher layers
> >>>>>> (oVirt/OpenStack) do.
> >>>>> Can't the guest make this decision, instead of the host?
> >>>> By guest, do you mean the guest OS itself or the admin of the guest VM?
> >>> It could be either.  But even if action is required from the
> >>> guest admin to get better performance in some cases, I'd argue
> >>> that the default behavior of a Linux guest shouldn't cause a
> >>> performance regression if the host stops hiding a feature in
> >>> CPUID.
> >>>
> >>>> I am thinking about maybe adding a kernel boot command line option like
> >>>> "unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use an
> >>>> unfair spinlock if the number of CPUs is 4 or less, for example. The
> >>>> default value of 0 will keep the behavior as it is today. Please
> >>>> let me know what you guys think about that.
> >>> If that's implemented, can't Linux choose a reasonable default
> >>> for unfair_pvspinlock_cpu_threshold that won't require the admin
> >>> to manually configure it in most cases?
> >> It is hard to have a fixed value, as it depends on the CPUs being used as
> >> well as the kind of workloads being run. Besides, using unfair
> >> locks has the undesirable side effect of being subject to lock
> >> starvation under certain circumstances. So we may not want it to be
> >> turned on by default. Customers have to take on that risk themselves
> >> if they want it.
> > Probably I am not seeing all variables involved, so pardon my
> > confusion.  Would unfair_pvspinlock_cpu_threshold > num_cpus just
> > disable usage of kvm_pv_unhalt, or make the guest choose a
> > completely different spinlock implementation?
> 
> What I am proposing is that if num_cpus <=
> unfair_pvspinlock_cpu_threshold, the unfair spinlock will be used even
> if kvm_pv_unhalt is set.
> 
> > Is the current default behavior of Linux guests when
> > kvm_pv_unhalt is unavailable a good default?  If using
> > kvm_pv_unhalt is not always a good idea, why do Linux guests
> > default to eagerly trying to use it only because the host says
> > it's available?
> 
> For kernels with CONFIG_PARAVIRT_SPINLOCKS, the current default is to use
> pvqspinlock if kvm_pv_unhalt is enabled, but the unfair spinlock if it
> is disabled. For kernels with just CONFIG_PARAVIRT but not
> CONFIG_PARAVIRT_SPINLOCKS, the unfair lock will be used no matter the
> setting of kvm_pv_unhalt. Without those config options, the standard
> qspinlock will be used.

Thanks for the explanation.

Now, I don't know yet what's the best default for a guest that
has CONFIG_PARAVIRT_SPINLOCKS when it sees a host that supports
kvm_pv_unhalt.  But I'm arguing that it's the guest's
responsibility to choose what to do when it detects such a host,
instead of expecting the host to hide features from the guest.
The guest and the guest administrator have more information to
choose what's best.

In other words, if exposing kvm_pv_unhalt on CPUID really makes
some guests behave poorly, can we fix the guests instead?

-- 
Eduardo



Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-10-13 Thread Waiman Long
On 10/13/2017 03:01 PM, Eduardo Habkost wrote:
> On Wed, Oct 11, 2017 at 04:19:38PM -0400, Waiman Long wrote:
>> On 10/10/2017 03:41 PM, Eduardo Habkost wrote:
>>> On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
>>>> On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
>>>>>> Yes.  Another possibility is to enable it when there is >1 NUMA node in
>>>>>> the guest.  We generally don't do this kind of magic but higher layers
>>>>>> (oVirt/OpenStack) do.
>>>>> Can't the guest make this decision, instead of the host?
>>>> By guest, do you mean the guest OS itself or the admin of the guest VM?
>>> It could be either.  But even if action is required from the
>>> guest admin to get better performance in some cases, I'd argue
>>> that the default behavior of a Linux guest shouldn't cause a
>>> performance regression if the host stops hiding a feature in
>>> CPUID.
>>>
>>>> I am thinking about maybe adding a kernel boot command line option like
>>>> "unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use an
>>>> unfair spinlock if the number of CPUs is 4 or less, for example. The
>>>> default value of 0 will keep the behavior as it is today. Please
>>>> let me know what you guys think about that.
>>> If that's implemented, can't Linux choose a reasonable default
>>> for unfair_pvspinlock_cpu_threshold that won't require the admin
>>> to manually configure it in most cases?
>> It is hard to have a fixed value, as it depends on the CPUs being used as
>> well as the kind of workloads being run. Besides, using unfair
>> locks has the undesirable side effect of being subject to lock
>> starvation under certain circumstances. So we may not want it to be
>> turned on by default. Customers have to take on that risk themselves
>> if they want it.
> Probably I am not seeing all variables involved, so pardon my
> confusion.  Would unfair_pvspinlock_cpu_threshold > num_cpus just
> disable usage of kvm_pv_unhalt, or make the guest choose a
> completely different spinlock implementation?

What I am proposing is that if num_cpus <=
unfair_pvspinlock_cpu_threshold, the unfair spinlock will be used even
if kvm_pv_unhalt is set.
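
Roughly, in code (a sketch only; prefer_unfair_spinlock() and the
threshold variable are hypothetical names for this proposal, not
existing kernel symbols):

/* Set via the proposed boot option; 0 (the default) never triggers. */
static unsigned int unfair_pvspinlock_cpu_threshold;

/*
 * Even when the host advertises KVM_FEATURE_PV_UNHALT, fall back to
 * the unfair test-and-set lock on guests at or below the threshold.
 */
static bool prefer_unfair_spinlock(void)
{
	return unfair_pvspinlock_cpu_threshold &&
	       num_possible_cpus() <= unfair_pvspinlock_cpu_threshold;
}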

> Is the current default behavior of Linux guests when
> kvm_pv_unhalt is unavailable a good default?  If using
> kvm_pv_unhalt is not always a good idea, why do Linux guests
> default to eagerly trying to use it only because the host says
> it's available?

For kernels with CONFIG_PARAVIRT_SPINLOCKS, the current default is to use
pvqspinlock if kvm_pv_unhalt is enabled, but the unfair spinlock if it
is disabled. For kernels with just CONFIG_PARAVIRT but not
CONFIG_PARAVIRT_SPINLOCKS, the unfair lock will be used no matter the
setting of kvm_pv_unhalt. Without those config options, the standard
qspinlock will be used.
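
In outline, that selection looks like this (a simplified sketch, not
the literal kernel code, which lives in arch/x86/include/asm/qspinlock.h
and arch/x86/kernel/kvm.c; the use_*() helpers are made-up names for
illustration):

static void guest_spinlock_init(void)
{
#if defined(CONFIG_PARAVIRT_SPINLOCKS)
	if (kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		use_pv_qspinlock();     /* paravirt queued spinlock */
	else
		use_unfair_tas_lock();  /* test-and-set fallback */
#elif defined(CONFIG_PARAVIRT)
	use_unfair_tas_lock();          /* unfair, regardless of kvm_pv_unhalt */
#else
	use_native_qspinlock();         /* standard queued spinlock */
#endif
}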

Cheers,
Longman






Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-10-13 Thread Eduardo Habkost
On Wed, Oct 11, 2017 at 04:19:38PM -0400, Waiman Long wrote:
> On 10/10/2017 03:41 PM, Eduardo Habkost wrote:
> > On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
> >> On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
> >>>> Yes.  Another possibility is to enable it when there is >1 NUMA node in
> >>>> the guest.  We generally don't do this kind of magic but higher layers
> >>>> (oVirt/OpenStack) do.
> >>> Can't the guest make this decision, instead of the host?
> >> By guest, do you mean the guest OS itself or the admin of the guest VM?
> > It could be either.  But even if action is required from the
> > guest admin to get better performance in some cases, I'd argue
> > that the default behavior of a Linux guest shouldn't cause a
> > performance regression if the host stops hiding a feature in
> > CPUID.
> >
> >> I am thinking about maybe adding a kernel boot command line option like
> >> "unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use an
> >> unfair spinlock if the number of CPUs is 4 or less, for example. The
> >> default value of 0 will keep the behavior as it is today. Please
> >> let me know what you guys think about that.
> > If that's implemented, can't Linux choose a reasonable default
> > for unfair_pvspinlock_cpu_threshold that won't require the admin
> > to manually configure it in most cases?
> 
> It is hard to have a fixed value, as it depends on the CPUs being used as
> well as the kind of workloads being run. Besides, using unfair
> locks has the undesirable side effect of being subject to lock
> starvation under certain circumstances. So we may not want it to be
> turned on by default. Customers have to take on that risk themselves
> if they want it.

Probably I am not seeing all variables involved, so pardon my
confusion.  Would unfair_pvspinlock_cpu_threshold > num_cpus just
disable usage of kvm_pv_unhalt, or make the guest choose a
completely different spinlock implementation?

Is the current default behavior of Linux guests when
kvm_pv_unhalt is unavailable a good default?  If using
kvm_pv_unhalt is not always a good idea, why do Linux guests
default to eagerly trying to use it only because the host says
it's available?

-- 
Eduardo



Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-10-11 Thread Waiman Long
On 10/10/2017 03:41 PM, Eduardo Habkost wrote:
> On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
>> On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
>>>> Yes.  Another possibility is to enable it when there is >1 NUMA node in
>>>> the guest.  We generally don't do this kind of magic but higher layers
>>>> (oVirt/OpenStack) do.
>>> Can't the guest make this decision, instead of the host?
>> By guest, do you mean the guest OS itself or the admin of the guest VM?
> It could be either.  But even if action is required from the
> guest admin to get better performance in some cases, I'd argue
> that the default behavior of a Linux guest shouldn't cause a
> performance regression if the host stops hiding a feature in
> CPUID.
>
>> I am thinking about maybe adding a kernel boot command line option like
>> "unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use an
>> unfair spinlock if the number of CPUs is 4 or less, for example. The
>> default value of 0 will keep the behavior as it is today. Please
>> let me know what you guys think about that.
> If that's implemented, can't Linux choose a reasonable default
> for unfair_pvspinlock_cpu_threshold that won't require the admin
> to manually configure it in most cases?

It is hard to have a fixed value, as it depends on the CPUs being used as
well as the kind of workloads being run. Besides, using unfair
locks has the undesirable side effect of being subject to lock
starvation under certain circumstances. So we may not want it to be
turned on by default. Customers have to take on that risk themselves
if they want it.

Regards,
Longman



Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-10-10 Thread Eduardo Habkost
On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
> On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
> >> Yes.  Another possibility is to enable it when there is >1 NUMA node in
> >> the guest.  We generally don't do this kind of magic but higher layers
> >> (oVirt/OpenStack) do.
> > Can't the guest make this decision, instead of the host?
> 
> By guest, do you mean the guest OS itself or the admin of the guest VM?

It could be either.  But even if action is required from the
guest admin to get better performance in some cases, I'd argue
that the default behavior of a Linux guest shouldn't cause a
performance regression if the host stops hiding a feature in
CPUID.

> 
> I am thinking about maybe adding a kernel boot command line option like
> "unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use an
> unfair spinlock if the number of CPUs is 4 or less, for example. The
> default value of 0 will keep the behavior as it is today. Please
> let me know what you guys think about that.

If that's implemented, can't Linux choose a reasonable default
for unfair_pvspinlock_cpu_threshold that won't require the admin
to manually configure it in most cases?

-- 
Eduardo



Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-10-10 Thread Waiman Long
On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
>> Yes.  Another possibility is to enable it when there is >1 NUMA node in
>> the guest.  We generally don't do this kind of magic but higher layers
>> (oVirt/OpenStack) do.
> Can't the guest make this decision, instead of the host?

By guest, do you mean the guest OS itself or the admin of the guest VM?

I am thinking about maybe adding a kernel boot command line option like
"unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use an
unfair spinlock if the number of CPUs is 4 or less, for example. The
default value of 0 will keep the behavior as it is today. Please
let me know what you guys think about that.
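
A minimal sketch of how such an option could be wired up (hypothetical
code; this parameter does not exist in the kernel today):

/* 0 (the default) keeps the current behavior. */
static unsigned int unfair_pvspinlock_cpu_threshold;

static int __init parse_unfair_threshold(char *arg)
{
	return kstrtouint(arg, 10, &unfair_pvspinlock_cpu_threshold);
}
early_param("unfair_pvspinlock_cpu_threshold", parse_unfair_threshold);

The guest would then pick the unfair test-and-set lock whenever
num_possible_cpus() <= unfair_pvspinlock_cpu_threshold, even if the
host advertises kvm_pv_unhalt.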

Cheers,
Longman

Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default

2017-10-10 Thread Eduardo Habkost
(CCing libvir-list)

On Mon, Oct 09, 2017 at 05:47:44PM +0200, Paolo Bonzini wrote:
> On 09/10/2017 17:15, Waiman Long wrote:
> > On 10/09/2017 09:39 AM, Paolo Bonzini wrote:
> >> On 06/10/2017 23:52, Eduardo Habkost wrote:
> >>> This series enables kvm_pv_unhalt by default on pc-*-2.11 and
> >>> newer.
> >>
> >> I've discussed PV spinlocks with some folks at Microsoft for a few weeks
> >> now, and I'm not 100% sure about enabling kvm_pv_unhalt by default.
> >> It's probably a good idea overall, but it does come with some caveats.
> >>
> >> The problem is that there were two different implementations of fair
> >> spinlocks in Linux, ticket spinlocks and queued spinlocks.  When
> >> kvm_pv_unhalt is disabled, ticket spinlocks sucked badly indeed; queued
> >> spinlocks however simply revert to unfair spinlocks, which loses the
> >> fairness but has the best performance.  See virt_spin_lock in
> >> arch/x86/include/asm/qspinlock.h.
> >>
> >> Now, fair spinlocks are only really needed for large NUMA machines.
> >> With a single NUMA node, for example, test-and-set spinlocks work well
> >> enough; there's not _much_ need for fairness in practice, and the
> >> paravirtualization does introduce some overhead.
> >>
> >> Therefore, the best performance would be achieved with kvm_pv_unhalt
> >> disabled on small VMs, and enabled on large VMs spanning multiple host
> >> NUMA nodes.
> >>
> >> Waiman, Davidlohr, do you have an opinion on this as well?
> > 
> > I agree. Using an unfair lock in a small VM is good for performance. The
> > only problem I see is how we define what is small. Now, even a
> > machine with a single socket can have up to 28 cores, 56 threads. If a
> > VM has up to 56 vCPUs, we may still want pvqspinlock to be used.
> 
> Yes.  Another possibility is to enable it when there is >1 NUMA node in
> the guest.  We generally don't do this kind of magic but higher layers
> (oVirt/OpenStack) do.

Can't the guest make this decision, instead of the host?

> 
> It also depends on how much worse performance is with PV qspinlocks,
> especially since now there's also vcpu_is_preempted that has to be
> thrown into the mix.  A lot more testing is needed.
> 

This is an interesting case for "-cpu host"/host-passthrough:
use of "-cpu host" carries an implicit assumption that no feature
will decrease performance just because we told the guest that it
is available.

Can't the guest choose the most reasonable option based on the
information we provide to it, instead of the host having to hide
information from the guest to get better performance?
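
For reference, the information in question is a CPUID bit the guest can
inspect directly.  A user-space sketch, assuming the KVM paravirt
feature leaf 0x40000001 with KVM_FEATURE_PV_UNHALT at bit 7 (per
Documentation/virtual/kvm/cpuid.txt; a robust version would first
verify the "KVMKVMKVM" signature at leaf 0x40000000):

#include <stdio.h>
#include <cpuid.h>

#define KVM_CPUID_FEATURES	0x40000001
#define KVM_FEATURE_PV_UNHALT	7

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* EAX of leaf 0x40000001 holds the KVM paravirt feature bits. */
	__cpuid(KVM_CPUID_FEATURES, eax, ebx, ecx, edx);
	printf("kvm_pv_unhalt: %s\n",
	       (eax & (1u << KVM_FEATURE_PV_UNHALT)) ? "yes" : "no");
	return 0;
}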


> > Is the kvm_pv_unhalt flag a global one for all VMs within a machine? Or
> > can it be different for each VM? We may want to have this flag
> > dynamically determined depending on the configuration of the VM.
> 
> It's per-VM, so that's easy.

-- 
Eduardo
