* Avi Kivity [2012-01-16 11:00:41]:
> Wait, what happens with yield_on_hlt=0? Will the hypercall work as
> advertised?
Hmm... I don't think it will work when yield_on_hlt=0.
One option is to make the kick hypercall available only when
yield_on_hlt=1?
- vatsa
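A minimal sketch of the option floated above, assuming the feature bit is set
from KVM's cpuid code and gated on the vmx module's yield_on_hlt parameter
(the helper name and wiring are hypothetical; Avi's reply below argues against
exactly this kind of coupling):

    /* Hypothetical: only advertise the kick feature when HLT exiting is
     * enabled, since a kicked vcpu otherwise never traps out of HLT. */
    static void kvm_update_pv_kick_feature(struct kvm_cpuid_entry2 *entry)
    {
            if (yield_on_hlt)
                    entry->eax |= (1 << KVM_FEATURE_WAIT_FOR_KICK);
    }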
* Alexander Graf [2012-01-16 04:57:45]:
> Speaking of which - have you benchmarked performance degradation of pv ticket
> locks on bare metal?
You mean, run the kernel on bare metal with CONFIG_PARAVIRT_SPINLOCKS enabled
and compare how it performs against CONFIG_PARAVIRT_SPINLOCKS disabled, for
some
* Avi Kivity [2012-01-16 12:14:27]:
> > One option is to make the kick hypercall available only when
> > yield_on_hlt=1?
>
> It's not a good idea to tie various options together. Features should
> be orthogonal.
>
> Can't we make it work? Just have different handling for
> KVM_REQ_PVLOCK_KICK
* Marcelo Tosatti [2012-01-17 09:02:11]:
> > +/* Kick vcpu waiting on @lock->head to reach value @ticket */
> > +static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
> > +{
> > +	int cpu;
> > +	int apicid;
> > +
> > +	add_stats(RELEASED_SLOW, 1);
> > +
> > +	for_each
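The quoted hunk is cut off at the loop. A plausible completion, sketched from
the shape the function took in later versions of the series (waiting_cpus,
klock_waiting and kvm_kick_cpu are assumed from the rest of the series,
reconstructed from memory rather than from this posting):

    static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
    {
            int cpu;

            add_stats(RELEASED_SLOW, 1);

            /* Scan the cpus registered as waiting and kick the one whose
             * (lock, want) pair matches the ticket being released. */
            for_each_cpu(cpu, &waiting_cpus) {
                    const struct kvm_lock_waiting *w = &per_cpu(klock_waiting, cpu);

                    if (ACCESS_ONCE(w->lock) == lock &&
                        ACCESS_ONCE(w->want) == ticket) {
                            add_stats(RELEASED_SLOW_KICKED, 1);
                            kvm_kick_cpu(cpu);
                            break;
                    }
            }
    }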
* Gleb Natapov [2012-01-17 11:14:13]:
> > The problem case I was thinking of was when guest VCPU would have issued
> > HLT with interrupts disabled. I guess one option is to inject an NMI,
> > and have the guest kernel NMI handler recognize this and make
> > adjustments such that the vcpu avoids
* Gleb Natapov [2012-01-17 14:51:26]:
> On Tue, Jan 17, 2012 at 05:56:50PM +0530, Srivatsa Vaddagiri wrote:
> > * Gleb Natapov [2012-01-17 11:14:13]:
> >
> > > > The problem case I was thinking of was when guest VCPU would have issued
> > > > HLT with interrupts disabled.
* Gleb Natapov [2012-01-17 15:20:51]:
> > Having the hypercall makes the intent of the vcpu (to sleep on a kick) clear
> > to the hypervisor, vs assuming it because of a trapped HLT instruction (which
> > anyway won't work when yield_on_hlt=0).
> >
> The purpose of yield_on_hlt=0 is to allow VCPU
* Jeremy Fitzhardinge [2012-01-18 12:34:42]:
> >> What prevents a kick from being lost here, if say, the waiter is at
> >> local_irq_save in kvm_lock_spinning, before the lock/want assignments?
> > The waiter does check for lock becoming available before actually
> > sleeping:
> >
> > + /*
> >
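The check being referred to, sketched from the eventual shape of
kvm_lock_spinning (from memory; stats, the irqs-enabled safe_halt() variant
and the NMI guard are omitted here):

    static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
    {
            struct kvm_lock_waiting *w = &__get_cpu_var(klock_waiting);
            int cpu = smp_processor_id();
            unsigned long flags;

            /* Keep interrupts from touching the half-initialized state. */
            local_irq_save(flags);

            /* Publish (lock, want) so the unlocker can find us. */
            w->want = want;
            smp_wmb();
            w->lock = lock;
            cpumask_set_cpu(cpu, &waiting_cpus);

            /*
             * Re-check: if the lock became free between the fast-path
             * failure and here, the kick has already been sent (or was
             * never needed) and we must not sleep.
             */
            if (ACCESS_ONCE(lock->tickets.head) == want)
                    goto out;

            /* Sleep until kvm_kick_cpu() or an interrupt wakes us. */
            halt();
    out:
            cpumask_clear_cpu(cpu, &waiting_cpus);
            w->lock = NULL;
            local_irq_restore(flags);
    }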
* Marcelo Tosatti [2012-01-17 13:53:03]:
> On Tue, Jan 17, 2012 at 05:32:33PM +0200, Gleb Natapov wrote:
> > On Tue, Jan 17, 2012 at 07:58:18PM +0530, Srivatsa Vaddagiri wrote:
> > > * Gleb Natapov [2012-01-17 15:20:51]:
> > >
> > > > > Having the hy
* Thomas Gleixner [2012-03-31 00:07:58]:
> I know that Peter is going to go berserk on me, but if we are running
> a paravirt guest then it's simple to provide a mechanism which allows
> the host (aka hypervisor) to check that in the guest just by looking
> at some global state.
>
> So if a gues
* Srivatsa Vaddagiri [2012-03-31 09:37:45]:
> The issue is with ticketlocks though. VCPUs could go into a spin w/o
> a lock being held by anybody. Say VCPUs 1-99 try to grab a lock in
> that order (on a host with one cpu). VCPU1 wins (after VCPU0 releases it)
> and releases the lo
* Ian Campbell [2012-04-16 17:36:35]:
> > > The current pv-spinlock patches however does not track which vcpu is
> > > spinning at what head of the ticketlock. I suppose we can consider
> > > that optimization in future and see how much benefit it provides (over
> > > plain yield/sleep the way i
* Raghavendra K T [2012-05-07 19:08:51]:
> I'll get hold of a PLE machine and come up with the numbers soon, but I
> expect the improvement to be around 1-3% as it was in the last version.
Deferring preemption (when vcpu is holding lock) may give us better than 1-3%
results on PLE hardware. Something worth trying IMHO.
* Avi Kivity [2012-05-07 16:49:25]:
> > Deferring preemption (when vcpu is holding lock) may give us better than
> > 1-3% results on PLE hardware. Something worth trying IMHO.
>
> Is the improvement so low, because PLE is interfering with the patch, or
> because PLE already does a good job
On Wed, Nov 03, 2010 at 10:59:45AM -0400, Jeremy Fitzhardinge wrote:
> Make the bulk of __ticket_spin_lock look identical for large and small
> number of cpus.
[snip]
> #if (NR_CPUS < 256)
> static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> {
> -	register union {
> -
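For reference, the unified body (one xadd-based loop regardless of NR_CPUS)
ended up looking roughly like this; a sketch from memory of the series, with
__raw_tickets and xadd() as defined there:

    static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
    {
            register struct __raw_tickets inc = { .tail = 1 };

            /* Atomically take a ticket and read the current head. */
            inc = xadd(&lock->tickets, inc);

            for (;;) {
                    if (inc.head == inc.tail)
                            break;          /* our turn */
                    cpu_relax();
                    inc.head = ACCESS_ONCE(lock->tickets.head);
            }
            barrier();      /* the critical section starts here */
    }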
On Tue, Nov 16, 2010 at 01:08:44PM -0800, Jeremy Fitzhardinge wrote:
> From: Jeremy Fitzhardinge
>
> Maintain a flag in both LSBs of the ticket lock which indicates whether
> anyone is in the lock slowpath and may need kicking when the current
> holder unlocks. The flags are set when the first l
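A sketch of the flag scheme as it later stabilized upstream (flag in the
ticket LSB, tickets advancing by TICKET_LOCK_INC = 2; this posting kept the
flag in both LSBs, so treat the details below as the later variant, from
memory; __ticket_unlock_slowpath() is the kick path from the same series):

    #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)
    #define TICKET_LOCK_INC         ((__ticket_t)2)

    /* A waiter about to sleep marks the lock as contended... */
    static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
    {
            set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
    }

    /* ...and the unlocker only pays for the kick when the flag is set. */
    static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
    {
            arch_spinlock_t prev = *lock;

            add_smp(&lock->tickets.head, TICKET_LOCK_INC);
            if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                    __ticket_unlock_slowpath(lock, prev);
    }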
On Mon, Jan 17, 2011 at 08:52:22PM +0530, Srivatsa Vaddagiri wrote:
> I think this is still racy ..
>
> Unlocker                      Locker
>
> test slowpath
>  -> false
>
On Tue, Nov 16, 2010 at 01:08:31PM -0800, Jeremy Fitzhardinge wrote:
> From: Jeremy Fitzhardinge
>
> Hi all,
>
> This is a revised version of the pvticket lock series.
The 3-patch series to follow this email extends KVM-hypervisor and Linux guest
running on KVM-hypervisor to support pv-ticketlocks, indicated to the guest via
KVM_FEATURE_WAIT_FOR_KICK/KVM_CAP_WAIT_FOR_KICK. Qemu needs a corresponding
patch to pass up the presence of this feature to the guest via cpuid. A patch
for qemu will be sent separately.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
---
arch/x86/include/asm
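On the guest side, latching onto the advertised feature typically looks like
this, using the standard kvm_para helpers (the pv_lock_ops hook names are
from later versions of the series and the callee-save wrapping is omitted; a
sketch, not the exact patch):

    static __init int kvm_spinlock_init(void)
    {
            if (!kvm_para_available())
                    return 0;
            if (!kvm_para_has_feature(KVM_FEATURE_WAIT_FOR_KICK))
                    return 0;

            /* Route the ticketlock slowpaths through the hypercalls. */
            pv_lock_ops.lock_spinning = kvm_lock_spinning;
            pv_lock_ops.unlock_kick = kvm_unlock_kick;
            return 0;
    }
    early_initcall(kvm_spinlock_init);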
Add debugfs support to print u32-arrays.
Most of this comes from the Xen-hypervisor sources and has been refactored to
make the code common for other users as well.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
---
 arch/x86/xen/debugfs.c |  104
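The refactored helper eventually landed in fs/debugfs as
debugfs_create_u32_array(); typical usage from the spinlock stats code looks
roughly like this (signature as I remember the original API; later kernels
changed it, and d_spin_debug/spinlock_stats/HISTO_BUCKETS are names from the
series):

    static struct dentry *d_spin_debug;

    static void __init kvm_spinlock_debugfs_init(void)
    {
            d_spin_debug = debugfs_create_dir("spinlocks", NULL);

            /* Export the histogram of blocked times as one read-only file. */
            debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
                                     spinlock_stats.histo_spin_blocked,
                                     HISTO_BUCKETS + 1);
    }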
pv_lock_ops.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
---
 arch/x86/Kconfig                |    9 +
 arch/x86/include/asm/kvm_para.h |    8 +
 arch/x86/kernel/head64.c        |    3
 arch/x86/kernel/kvm.c           |  208
 4 files
On Wed, Jan 19, 2011 at 10:42:39PM +0530, Srivatsa Vaddagiri wrote:
> Add two hypercalls to KVM hypervisor to support pv-ticketlocks.
>
> KVM_HC_WAIT_FOR_KICK blocks the calling vcpu until another vcpu kicks it or it
> is woken up because of an event like an interrupt.
One possibility
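One natural host-side shape for this (not necessarily the possibility being
alluded to above) is to reuse KVM's existing vcpu blocking primitive;
kvm_vcpu_block() is real KVM infrastructure, the handler around it is a
hypothetical sketch:

    /* Called from the hypercall dispatcher for KVM_HC_WAIT_FOR_KICK. */
    static int kvm_pv_wait_for_kick(struct kvm_vcpu *vcpu)
    {
            /*
             * kvm_vcpu_block() sleeps until the vcpu becomes runnable
             * again - a kick, an interrupt, or a pending signal. An event
             * that raced in before the hypercall leaves the vcpu runnable,
             * so the block falls through and no wakeup is lost.
             */
            kvm_vcpu_block(vcpu);
            return 0;
    }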
On Wed, Jan 19, 2011 at 06:21:12PM +0100, Peter Zijlstra wrote:
> I didn't really read the patch, and I totally forgot everything from
> when I looked at the Xen series, but does the Xen/KVM hypercall
> interface for this include the vcpu to await the kick from?
No, not yet, for the reasons you mention
On Wed, Jan 19, 2011 at 10:31:06AM -0800, Jeremy Fitzhardinge wrote:
> I think you're probably right; when I last tested this code, it was
> hanging in at about the rate this kind of race would cause. And in my
> previous analysis of similar schemes (the current pv spinlock code), it
> was always
On Wed, Jan 19, 2011 at 10:55:10AM -0800, Jeremy Fitzhardinge wrote:
> On 01/19/2011 10:39 AM, Srivatsa Vaddagiri wrote:
> > I have tested quite extensively with booting a 16-vcpu guest (on a 16-pcpu
> > host) and running kernel compile (with 32 threads). With
On Wed, Jan 19, 2011 at 10:53:52AM -0800, Jeremy Fitzhardinge wrote:
> > The reason for wanting this should be clear I guess, it allows PI.
>
> Well, if we can expand the spinlock to include an owner, then all this
> becomes moot...
How so? Having an owner will not eliminate the need for pv-ticketlocks
On Wed, Jan 19, 2011 at 10:53:52AM -0800, Jeremy Fitzhardinge wrote:
> > I didn't really read the patch, and I totally forgot everything from
> > when I looked at the Xen series, but does the Xen/KVM hypercall
> > interface for this include the vcpu to await the kick from?
> >
> > My guess is not,
On Thu, Jan 20, 2011 at 02:41:46PM +0100, Peter Zijlstra wrote:
> On Thu, 2011-01-20 at 17:29 +0530, Srivatsa Vaddagiri wrote:
> >
> > If we had a yield-to [1] sort of interface _and_ information on which vcpu
> > owns a lock, then lock-spinners can yield-to the owning vcpu
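With owner information available, the host half of such a yield-to could be
a thin wrapper over the scheduler's directed yield; kvm_get_vcpu() is a real
KVM API and kvm_vcpu_yield_to() is the primitive that later appeared for
exactly this, while the helper itself is hypothetical:

    static bool yield_to_lock_holder(struct kvm_vcpu *me, int holder_id)
    {
            struct kvm_vcpu *target = kvm_get_vcpu(me->kvm, holder_id);

            /* Donate the remaining timeslice to the lock-holding vcpu. */
            if (!target || target == me)
                    return false;
            return kvm_vcpu_yield_to(target) > 0;
    }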
On Thu, Jan 20, 2011 at 09:56:27AM -0800, Jeremy Fitzhardinge wrote:
> > The key here is not to sleep when waiting for locks (as implemented by
> > current patch-series, which can put other VMs at an advantage by giving
> > them more time than they are entitled to)
>
> Why? If a
On Fri, Jan 21, 2011 at 09:48:29AM -0500, Rik van Riel wrote:
> >>Why? If a VCPU can't make progress because its waiting for some
> >>resource, then why not schedule something else instead?
> >
> > In the process, "something else" can get more share of cpu resource than it's
> > entitled to and that'
> On Mon, Jan 24, 2011 at 01:56:53PM -0800, Jeremy Fitzhardinge wrote:
For some reason, I seem to be missing emails from your id/domain and hence had
missed this completely!
> > * bits. However, we need to be careful about this because someone
> > * may just be entering as we leave, and ente
* Alexander Graf [2012-01-16 04:23:24]:
> > +5. KVM_HC_KICK_CPU
> > +
> > +Value: 5
> > +Architecture: x86
> > +Purpose: Hypercall used to wake up a vcpu from HLT state
> > +
> > +Usage example: A vcpu of a paravirtualized guest that is busywaiting in guest
> > +kernel
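For reference, the guest half of this hypercall, as it looked in later
versions of the series (reconstructed from memory; the flags argument is
unused and passed as 0):

    static void kvm_kick_cpu(int cpu)
    {
            int apicid = per_cpu(x86_cpu_to_apicid, cpu);

            /* Ask the host to wake the HLTed vcpu with this apic id. */
            kvm_hypercall2(KVM_HC_KICK_CPU, 0, apicid);
    }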