On Mon, Jul 07, 2014 at 05:27:34PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 20, 2014 at 09:46:08AM -0400, Konrad Rzeszutek Wilk wrote:
> > I dug in the code and I have some comments about it, but before
> > I post them I was wondering if you have any plans to run any performance
> > tests against the PV ticketlock with normal and over-committed scenarios?
On Fri, Jun 20, 2014 at 09:46:08AM -0400, Konrad Rzeszutek Wilk wrote:
> I dug in the code and I have some comments about it, but before
> I post them I was wondering if you have any plans to run any performance
> tests against the PV ticketlock with normal and over-committed scenarios?
I can bare
On Wed, Jun 18, 2014 at 02:03:12PM +0200, Paolo Bonzini wrote:
> On 17/06/2014 00:08, Waiman Long wrote:
> >>+void __pv_queue_unlock(struct qspinlock *lock)
> >>+{
> >>+	int val = atomic_read(&lock->val);
> >>+
> >>+	native_queue_unlock(lock);
> >>+
> >>+	if (val & _Q_LOCKED_SLOW)
> >>+		___pv_kick_head(lock);
> >>+}
On Mon, Jun 16, 2014 at 06:08:21PM -0400, Waiman Long wrote:
> On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
> >+struct pv_node {
> >+	struct mcs_spinlock mcs;
> >+	struct mcs_spinlock __offset[3];
> >+	int cpu, head;
> >+};
>
> I am wondering why you need the separate cpu and head
On Sun, Jun 15, 2014 at 02:47:07PM +0200, Peter Zijlstra wrote:
> Add minimal paravirt support.
>
> The code aims for minimal impact on the native case.
Woot!
>
> On the lock side we add one jump label (asm_goto) and 4 paravirt
> callee saved calls that default to NOPs. The only effects are the
On 06/18/2014 08:03 AM, Paolo Bonzini wrote:
On 17/06/2014 00:08, Waiman Long wrote:
+void __pv_queue_unlock(struct qspinlock *lock)
+{
+	int val = atomic_read(&lock->val);
+
+	native_queue_unlock(lock);
+
+	if (val & _Q_LOCKED_SLOW)
+		___pv_kick_head(lock);
+}
+
Again a race can happen here between th
On 15/06/2014 14:47, Peter Zijlstra wrote:
#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
-#define queue_spin_unlock queue_spin_unlock
/**
* queue_spin_unlock - release a queue spinlock
* @lock : Pointer to queue spinlock structure
*
* An effective sm
On 17/06/2014 00:08, Waiman Long wrote:
+void __pv_queue_unlock(struct qspinlock *lock)
+{
+	int val = atomic_read(&lock->val);
+
+	native_queue_unlock(lock);
+
+	if (val & _Q_LOCKED_SLOW)
+		___pv_kick_head(lock);
+}
+
Again a race can happen here between th
I am resending this as my original reply had some HTML code in it and hence
was rejected by the mailing lists.
On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+/*
+ * Write a comment about how all this works...
+ */
+
+#define _Q_LOCKED_SLOW (2U << _Q_LOCKED_OFFSET)
+
+s
Add minimal paravirt support.
The code aims for minimal impact on the native case.
On the lock side we add one jump label (asm_goto) and 4 paravirt
callee saved calls that default to NOPs. The only effects are the
extra NOPs and some pointless MOVs to accommodate the calling
convention. No registe