On 6/12/19 12:38 AM, Alex Kogan wrote:
> Hi, Wei.
>
>> On Jun 11, 2019, at 12:22 AM, liwei (GF) wrote:
>>
>> Hi Alex,
>>
>> On 2019/3/29 23:20, Alex Kogan wrote:
>>> In CNA, spinning threads are organized in two queues, a main queue for
>>> threads running on the same node as the current lock holder, and a
>>> secondary queue for threads
On 6/6/19 11:32 AM, Waiman Long wrote:
> On 6/6/19 11:21 AM, Alex Kogan wrote:
>> Also, the paravirt code is under arch/x86, while CNA is generic (not
>> x86-specific). Do you still want to see CNA-related patching residing
>> under arch/x86?
>>
>> We still need a config option (something like NUMA_AWARE_SPINLOCKS) to
>> enable CNA patching under this config only, correct?
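A minimal sketch of that config gating, assuming a hypothetical
cna_configure_spin_lock_slowpath() entry point (the name is illustrative,
not from the posted series): the CNA patching code is built only under
the new option, and callers elsewhere see a no-op.

#ifdef CONFIG_NUMA_AWARE_SPINLOCKS
/* Built only when the option is set; performs the CNA patching. */
void cna_configure_spin_lock_slowpath(void);
#else
static inline void cna_configure_spin_lock_slowpath(void) { }
#endif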
On Tue, Jun 04, 2019 at 07:21:13PM -0400, Alex Kogan wrote:
> Trying to resume this work, I am looking for concrete steps required
> to integrate CNA with the paravirt patching.
>
> Looking at alternative_instructions(), I wonder if I need to add
> another call, something like apply_numa()
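For reference, alternative_instructions() in arch/x86/kernel/alternative.c
is where the existing boot-time patching passes run. A rough sketch of the
hook being asked about, with apply_numa() and its section markers purely
hypothetical names:

/*
 * Sketch only: apply_numa() and the __numa_table markers are
 * illustrative; apply_alternatives() is the existing x86 pass.
 */
void __init alternative_instructions(void)
{
        apply_alternatives(__alt_instructions, __alt_instructions_end);

        /* ... existing paravirt/SMP patching steps ... */

        apply_numa(__numa_table, __numa_table_end);     /* hypothetical */
}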
Hi, Peter, Longman,
> On Apr 3, 2019, at 12:01 PM, Peter Zijlstra wrote:
>
> On Wed, Apr 03, 2019 at 11:39:09AM -0400, Alex Kogan wrote:
>
The patch that I am looking for is to have a separate
numa_queued_spinlock_slowpath() that coexists with
native_queued_spinlock_slowpath()
On 04/04/2019 05:38 AM, Peter Zijlstra wrote:
> On Thu, Apr 04, 2019 at 07:05:24AM +0200, Juergen Gross wrote:
>
>> Without PARAVIRT_SPINLOCK this would be just an alternative() then?
> That could maybe work yes. This is all early enough.
Yes, alternative() should work as it is done before SMP
On Thu, Apr 04, 2019 at 07:05:24AM +0200, Juergen Gross wrote:
> Without PARAVIRT_SPINLOCK this would be just an alternative() then?
That could maybe work yes. This is all early enough.
> Maybe the resulting code would be much more readable if we'd just
> make PARAVIRT_SPINLOCK usable without
Hi, Hanjun.
> On Apr 3, 2019, at 10:02 PM, Hanjun Guo wrote:
>
> Hi Alex,
>
> On 2019/3/29 23:20, Alex Kogan wrote:
>> +
>> +static __always_inline void cna_init_node(struct mcs_spinlock *node, int cpuid,
>> +                                          u32 tail)
>> +{
>> +        if (decode_numa_node(node->node_and_count) == -1)
>> +                store_numa_node(node,
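A self-contained sketch of what this initialization appears to do. The
field layout and helpers below are assumptions for illustration: the NUMA
node (biased by +1 so that 0 means "not set") is packed into the upper
half of node_and_count, the nesting count into the lower half.

#include <stdint.h>

typedef uint32_t u32;

struct mcs_spinlock {
        struct mcs_spinlock *next;
        int locked;
        u32 node_and_count;     /* assumed: node+1 in high 16 bits, count low */
};

static inline int decode_numa_node(u32 val)
{
        return (int)(val >> 16) - 1;    /* -1 means node not stored yet */
}

static inline void store_numa_node(struct mcs_spinlock *node, int nid)
{
        node->node_and_count = ((u32)(nid + 1) << 16) |
                               (node->node_and_count & 0xffff);
}

/* Hypothetical stand-in for the kernel's cpu_to_node(). */
static inline int numa_node_of_cpu(int cpuid)
{
        return cpuid / 4;       /* e.g. four CPUs per node */
}

static inline void cna_init_node(struct mcs_spinlock *node, int cpuid, u32 tail)
{
        /* Cache this CPU's node on first use; later acquisitions skip the
         * lookup because the per-CPU node structure is reused. */
        if (decode_numa_node(node->node_and_count) == -1)
                store_numa_node(node, numa_node_of_cpu(cpuid));
        (void)tail;     /* unused in this sketch */
}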
On Wed, Apr 03, 2019 at 12:33:20PM -0400, Waiman Long wrote:
> static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> {
>         if (static_branch_unlikely(&numa_spinlock))
>                 numa_queued_spin_lock_slowpath(lock, val);
>         else
>
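A sketch of that selection scheme using the kernel's static-branch API
(DEFINE_STATIC_KEY_FALSE, static_branch_unlikely and static_branch_enable
are the real interfaces; the key name, the init hook, and the nr_node_ids
check are illustrative):

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(numa_spinlock);

void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
void numa_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);

static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
        /* Compiles to a patched no-op branch until the key is enabled. */
        if (static_branch_unlikely(&numa_spinlock))
                numa_queued_spin_lock_slowpath(lock, val);
        else
                native_queued_spin_lock_slowpath(lock, val);
}

/* Flip the key once, early in boot, only on multi-node machines. */
static int __init numa_spinlock_init(void)
{
        if (nr_node_ids > 1)
                static_branch_enable(&numa_spinlock);
        return 0;
}
early_initcall(numa_spinlock_init);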
On Wed, Apr 03, 2019 at 11:53:53AM -0400, Alex Kogan wrote:
> > One thing we could maybe do is change locked and count to u8, then your
> > overlay structure could be something like:
> >
> > struct mcs_spinlock {
> >         struct mcs_spinlock *next;
> >         u8 locked;
> >         u8 count;
> > };
> I
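Filling that idea out as a sketch: shrinking locked and count to u8
leaves two spare bytes before the 64-bit padding boundary, which a CNA
overlay could claim for the NUMA node id. The cna_node layout below is an
assumption for illustration, not the posted patch.

#include <stdint.h>

typedef uint8_t  u8;
typedef uint16_t u16;

struct mcs_spinlock {
        struct mcs_spinlock *next;
        u8 locked;      /* 1 if lock acquired */
        u8 count;       /* nesting count */
        /* two bytes of padding here on 64-bit */
};

/* Hypothetical overlay: the NUMA node id lives in what is padding in
 * struct mcs_spinlock, so the overlay does not grow the per-CPU nodes. */
struct cna_node {
        struct cna_node *next;
        u8  locked;
        u8  count;
        u16 numa_node;
};

_Static_assert(sizeof(struct cna_node) == sizeof(struct mcs_spinlock),
               "the overlay must not grow the MCS node");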
On Wed, Apr 03, 2019 at 11:39:09AM -0400, Alex Kogan wrote:
> >> The patch that I am looking for is to have a separate
> >> numa_queued_spinlock_slowpath() that coexists with
> >> native_queued_spinlock_slowpath() and
> >> paravirt_queued_spinlock_slowpath(). At boot time, we select the most
> >> appropriate one.
On 04/03/2019 11:39 AM, Alex Kogan wrote:
> Peter, Longman, many thanks for your detailed comments!
>
> A few follow-up questions are inlined below.
>
>> On Apr 2, 2019, at 5:43 AM, Peter Zijlstra wrote:
>>
>> On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
>>> On 03/29/2019 11:20 AM, Alex Kogan wrote:
>>>> +config
On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
> On 03/29/2019 11:20 AM, Alex Kogan wrote:
> > +config NUMA_AWARE_SPINLOCKS
> > +        bool "Numa-aware spinlocks"
> > +        depends on NUMA
> > +        default y
> > +        help
> > +          Introduce NUMA (Non Uniform Memory Access) awareness into
>
On Fri, Mar 29, 2019 at 11:20:04AM -0400, Alex Kogan wrote:
> +static inline void pass_mcs_lock(struct mcs_spinlock *node,
> +                                 struct mcs_spinlock *next)
> +{
> +        struct mcs_spinlock *succ = NULL;
> +
> +        succ = find_successor(node);
> +
> +        if (succ) {
> +
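A sketch of how the hand-off could complete, assuming find_successor()
returns the first waiter on the lock holder's node (having moved the
skipped waiters to the secondary queue) and NULL otherwise; one possible
find_successor() is sketched after the cover text at the end of this
thread. The fallback branch and the plain stores are illustrative; the
kernel would use arch_mcs_spin_unlock_contended() here.

struct mcs_spinlock {
        struct mcs_spinlock *next;
        int locked;
        int count;
};

/* Assumed helper: first same-node waiter, or NULL. */
struct mcs_spinlock *find_successor(struct mcs_spinlock *node);

static inline void pass_mcs_lock(struct mcs_spinlock *node,
                                 struct mcs_spinlock *next)
{
        struct mcs_spinlock *succ = find_successor(node);

        if (succ) {
                /* A waiter on our node exists: pass the lock to it. */
                succ->locked = 1;
        } else {
                /* No same-node waiter: fall back to plain MCS order and
                 * unlock the immediate successor. */
                next->locked = 1;
        }
}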
On Fri, Mar 29, 2019 at 11:20:04AM -0400, Alex Kogan wrote:
> diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
> index bc6d3244e1af..71ee4b64c5d4 100644
> --- a/kernel/locking/mcs_spinlock.h
> +++ b/kernel/locking/mcs_spinlock.h
> @@ -17,8 +17,18 @@
>
> struct
In CNA, spinning threads are organized in two queues, a main queue for
threads running on the same node as the current lock holder, and a
secondary queue for threads running on other nodes. At unlock time,
the lock holder scans the main queue looking for a thread running on
the same node. If such a thread is found, all waiting threads ahead of
it in the main queue are moved to the end of the secondary queue, and
the lock is passed to that thread.
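A compact, self-contained sketch of that unlock-time scan. The global
sec_head/sec_tail pair stands in for the per-lock secondary-queue
pointer, and the numa_node field for the encoded node id; both are
simplifications for illustration.

#include <stddef.h>

struct mcs_spinlock {
        struct mcs_spinlock *next;
        int locked;
        int numa_node;
};

static struct mcs_spinlock *sec_head, *sec_tail;

/* Scan the main queue from the lock holder for a waiter on the same
 * node; each other-node waiter skipped along the way is appended to
 * the secondary queue. Returns the same-node waiter, or NULL. */
static struct mcs_spinlock *find_successor(struct mcs_spinlock *me)
{
        struct mcs_spinlock *cur = me->next;

        while (cur) {
                if (cur->numa_node == me->numa_node)
                        return cur;     /* now heads the main queue */

                /* Unlink cur and append it to the secondary queue. */
                me->next = cur->next;
                cur->next = NULL;
                if (sec_tail)
                        sec_tail->next = cur;
                else
                        sec_head = cur;
                sec_tail = cur;

                cur = me->next;
        }
        return NULL;
}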