On Wed, Jul 15, 2015 at 10:18:35PM -0400, Waiman Long wrote:
> On 07/15/2015 06:03 AM, Peter Zijlstra wrote:
> >*groan*, so you complained the previous version of this patch was too
> >complex, but let me say I vastly preferred it to this one :/
>
> I said it was complex as maintaining a tri-state variable.
On Tue, Jul 14, 2015 at 10:13:36PM -0400, Waiman Long wrote:
> +static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
>  {
>  	struct pv_node *pn = (struct pv_node *)node;
>
> +	if (xchg(&pn->state, vcpu_running) == vcpu_running)
> +		return;
> +
On 07/15/2015 07:43 AM, Waiman Long wrote:
Performing CPU kicking at lock time can be a bit faster if there
is no kick-ahead. On the other hand, deferring it to unlock time is
preferable when kick-ahead can be performed or when the VM guest has
so few vCPUs that a vCPU may be kicked twice before getting the
lock. This patch implements the latter approach, deferring the kick
to unlock time.