> On Apr 13, 2021, at 8:03 AM, Peter Zijlstra wrote:
>
> On Thu, Apr 01, 2021 at 11:31:54AM -0400, Alex Kogan wrote:
>
>> @@ -49,13 +55,33 @@ struct cna_node {
>> u16 real_numa_node;
>> u32 encoded_
Hi, Andreas.
Thanks for the great questions.
> On Apr 14, 2021, at 3:47 AM, Andreas Herrmann wrote:
>
> On Thu, Apr 01, 2021 at 11:31:56AM -0400, Alex Kogan wrote:
>> This performance optimization chooses probabilistically to avoid moving
>> threads from the main queue i
> On Apr 13, 2021, at 5:22 PM, Andi Kleen wrote:
>
>>> ms granularity seems very coarse grained for this. Surely
>>> at some point of spinning you can afford a ktime_get? But ok.
>> We are reading time when we are at the head of the (main) queue, but
>> don’t have the lock yet. Not sure about
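(For context, a minimal sketch of the jiffies-based check under discussion; the
field and variable names are assumptions of this sketch, not code quoted in the
thread. The waiter samples jiffies once when it reaches the head of the main
queue, so no clock is read inside the hot spin loop itself.)

struct cna_node {
	struct mcs_spinlock	mcs;
	unsigned long		start_time;	/* jiffies when we reached the head */
};

static unsigned long intra_node_handoff_threshold;	/* in jiffies */

static inline bool intra_node_threshold_reached(struct cna_node *cn)
{
	/* jiffies is coarse (ms granularity) but nearly free to read */
	return time_after(jiffies,
			  cn->start_time + intra_node_handoff_threshold);
}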
Peter, thanks for all the comments and suggestions!
> On Apr 13, 2021, at 7:30 AM, Peter Zijlstra wrote:
>
> On Thu, Apr 01, 2021 at 11:31:53AM -0400, Alex Kogan wrote:
>
>> +/*
>> + * cna_splice_tail -- splice the next node from the primary queue onto
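(A sketch of what such a splice can look like, following the quoted comment.
The circular secondary queue, the tail encoding in node->locked, and the
decode_tail()/encoded_tail helpers are assumptions of this sketch rather than
code quoted in this thread.)

/* Move [first .. last] out of the main queue and append the sublist to
 * the secondary queue, whose tail is encoded in node->locked. */
static void cna_splice_tail(struct mcs_spinlock *node,
			    struct mcs_spinlock *first,
			    struct mcs_spinlock *last)
{
	/* detach [first .. last] from the main queue */
	node->next = last->next;

	if (node->locked <= 1) {
		/* no secondary queue yet: [first .. last] becomes one,
		 * closed into a circle so its tail can find its head */
		last->next = first;
	} else {
		/* append to the existing circular secondary queue */
		struct mcs_spinlock *tail_2nd = decode_tail(node->locked);
		struct mcs_spinlock *head_2nd = tail_2nd->next;

		tail_2nd->next = first;
		last->next = head_2nd;
	}

	/* remember the new secondary-queue tail */
	node->locked = ((struct cna_node *)last)->encoded_tail;
}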
Hi, Andi.
Thanks for your comments!
> On Apr 13, 2021, at 2:03 AM, Andi Kleen wrote:
>
> Alex Kogan writes:
>>
>> +numa_spinlock_threshold=[NUMA, PV_OPS]
>> +Set the time threshold in milliseconds for the
>> +
Prohibit moving certain threads (e.g., in irq and nmi contexts)
to the secondary queue. Those prioritized threads will always stay
in the primary queue, and so will have a shorter wait time for the lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel
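(A minimal sketch of that rule; CNA_PRIORITY_NODE and the field names are
assumptions of this sketch. Waiters that arrive from a context where in_task()
is false advertise a reserved node id, which the shuffling code treats as
matching the lock holder's node, so they are never moved to the secondary
queue.)

#define CNA_PRIORITY_NODE	0xffff	/* reserved id, never a real NUMA node */

static __always_inline void cna_init_node(struct mcs_spinlock *node)
{
	struct cna_node *cn = (struct cna_node *)node;

	/* irq/nmi waiters get the priority id and stay in the main queue */
	cn->numa_node = in_task() ? cn->real_numa_node : CNA_PRIORITY_NODE;
}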
Move some of the code manipulating the spin lock into separate functions.
This would allow easier integration of alternative ways to manipulate
that lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel/locking/qspinlock.c | 38
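(As a sketch of the refactoring idea, with illustrative helper names: the two
places where the slow path clears the tail or hands the MCS lock to the next
waiter become small inline helpers, so an alternative slow path such as CNA
can override just these hooks without duplicating the rest of the function.)

/* Release the lock by clearing the tail, if we are the last waiter. */
static __always_inline bool __try_clear_tail(struct qspinlock *lock, u32 val,
					     struct mcs_spinlock *node)
{
	return atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL);
}

/* Hand the MCS lock to the next waiter in the main queue. */
static __always_inline void __mcs_lock_handoff(struct mcs_spinlock *node,
					       struct mcs_spinlock *next)
{
	arch_mcs_lock_handoff(&next->locked, 1);
}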
introduce
any extra delays for threads waiting in that queue once it is created.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel/locking/qspinlock_cna.h | 39 ++
1 file changed, 39 insertions(+)
diff --git a/kernel/locking
The mcs unlock macro (arch_mcs_lock_handoff) should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored when passing the lock is
different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
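(In terms of the generic, barrier-based definition, the change amounts to the
sketch below; per-architecture variants differ, and this is an illustration of
the intent rather than the patch itself.)

/* before: the hand-off always stored the constant 1 */
#define arch_mcs_spin_unlock_contended(l)	smp_store_release((l), 1)

/* after: the caller chooses the value, so CNA can pass extra state
 * (e.g., an encoded secondary-queue tail) along with the lock */
#define arch_mcs_lock_handoff(l, val)		smp_store_release((l), (val))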
.131 (0.025) / 1.867
Further comments are welcome and appreciated.
Alex Kogan (6):
locking/qspinlock: Rename mcs lock/unlock macros and make them more
generic
locking/qspinlock: Refactor the qspinlock slow path
locking/qspinlock: Introduce CNA into the slow path of qspinlock
locking/
() is available.) This default behavior can be
overridden with the new kernel boot command-line option
"numa_spinlock=on/off" (default is "auto").
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
.../admin-guide/kernel-parameters.txt
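(A sketch of the boot-time plumbing behind that option; the flag name is an
assumption of this sketch.)

static int numa_spinlock_flag;	/* 0: auto (default), 1: on, -1: off */

static int __init numa_spinlock_setup(char *str)
{
	if (!strcmp(str, "auto")) {
		numa_spinlock_flag = 0;
		return 1;
	} else if (!strcmp(str, "on")) {
		numa_spinlock_flag = 1;
		return 1;
	} else if (!strcmp(str, "off")) {
		numa_spinlock_flag = -1;
		return 1;
	}
	return 0;
}
early_param("numa_spinlock", numa_spinlock_setup);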
The ms value is translated internally to the
nearest rounded-up jiffies.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
.../admin-guide/kernel-parameters.txt | 9 ++
kernel/locking/qspinlock_cna.h | 96 ---
2 files c
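(A sketch of that translation, reusing the intra_node_handoff_threshold
variable assumed in an earlier sketch; msecs_to_jiffies() rounds up, which
gives the "nearest rounded-up jiffies" behavior described above.)

static int __init numa_spinlock_threshold_setup(char *str)
{
	int param;

	if (get_option(&str, &param)) {
		/* translate once at boot; the spin path only compares jiffies */
		intra_node_handoff_threshold = msecs_to_jiffies(param);
		return 1;
	}
	return 0;
}
early_param("numa_spinlock_threshold", numa_spinlock_threshold_setup);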
> On Mar 22, 2021, at 7:15 PM, Alex Kogan wrote:
>
> Many thanks to Zhengjun Xing for the help in reproducing the issue.
>
> On our system, the regression is less than 7% (the numbers are below),
> however,
> at least at the full capacity, the numbers are very stable.
The mcs unlock macro (arch_mcs_lock_handoff) should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored when passing the lock is
different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
() is available.) This default behavior can be
overridden with the new kernel boot command-line option
"numa_spinlock=on/off" (default is "auto").
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
.../admin-guide/kernel-parameters.txt
Prohibit moving certain threads (e.g., in irq and nmi contexts)
to the secondary queue. Those prioritized threads will always stay
in the primary queue, and so will have a shorter wait time for the lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel
Move some of the code manipulating the spin lock into separate functions.
This would allow easier integration of alternative ways to manipulate
that lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel/locking/qspinlock.c | 38
introduce
any extra delays for threads waiting in that queue once it is created.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel/locking/qspinlock_cna.h | 39 +-
1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a
The ms value is translated internally to the
nearest rounded-up jiffies.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
.../admin-guide/kernel-parameters.txt | 9 ++
kernel/locking/qspinlock_cna.h | 95 ---
2 files c
1.115 (0.022) / 1.850
Further comments are welcome and appreciated.
Alex Kogan (6):
locking/qspinlock: Rename mcs lock/unlock macros and make them more
generic
locking/qspinlock: Refactor the qspinlock slow path
locking/qspinlock: Introduce CNA into the slow path of qspinlock
locking/q
rtain threads between waiting queues in CNA")
url:
https://github.com/0day-ci/linux/commits/Alex-Kogan/Add-NUMA-awareness-to-qspinlock/20201118-072506
base:
1.196 (0.019) 1.194
 32   0.726 (0.034)   1.163 (0.026)   1.601
 36   0.691 (0.030)   1.163 (0.020)   1.683
 72   0.627 (0.014)   1.136 (0.022)   1.812
108   0.613 (0.014)   1.143 (0.023)   1.865
142   0.610 (0.014)   1.120 (0.018)   1.838
Further comments are welcome and appreciated.
Alex Kogan (5):
locking/qspi
Move some of the code manipulating the spin lock into separate functions.
This would allow easier integration of alternative ways to manipulate
that lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel/locking/qspinlock.c | 38
Prohibit moving certain threads (e.g., in irq and nmi contexts)
to the secondary queue. Those prioritized threads will always stay
in the primary queue, and so will have a shorter wait time for the lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel
() is available.) This default behavior can be
overridden with the new kernel boot command-line option
"numa_spinlock=on/off" (default is "auto").
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
Documentation/admin-guide/kernel-parameters.t
The ms value is translated internally to the
nearest rounded-up jiffies.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
Documentation/admin-guide/kernel-parameters.txt | 9 +++
kernel/locking/qspinlock_cna.h | 95 +---
The mcs unlock macro (arch_mcs_lock_handoff) should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored when passing the lock is
different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
> On Sep 15, 2020, at 3:24 PM, Randy Dunlap wrote:
>
> Hi,
>
> Entries in the kernel-parameters.txt file should be kept in alphabetical order
> mostly (there are a few exceptions where related options are kept together).
>
>
>
> On 9/15/20 11:05 AM, Alex Kog
Move some of the code manipulating the spin lock into separate functions.
This would allow easier integration of alternative ways to manipulate
that lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel/locking/qspinlock.c | 38
Prohibit moving certain threads (e.g., in irq and nmi contexts)
to the secondary queue. Those prioritized threads will always stay
in the primary queue, and so will have a shorter wait time for the lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
kernel
The ms value is translated internally to the
nearest rounded-up jiffies.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
.../admin-guide/kernel-parameters.txt | 9 ++
kernel/locking/qspinlock_cna.h | 95 ---
2 files c
.145) 1.207 (0.024) 1.173
 32   0.721 (0.037)   1.158 (0.026)   1.605
 36   0.690 (0.043)   1.159 (0.028)   1.680
 72   0.622 (0.016)   1.136 (0.020)   1.826
108   0.608 (0.013)   1.144 (0.017)   1.882
142   0.602 (0.014)   1.122 (0.020)   1.864
Further comments are welcome and appreciated.
Alex Kogan (5):
loc
The mcs unlock macro (arch_mcs_lock_handoff) should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored when passing the lock is
different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
() is available.) This default behavior can be
overridden with the new kernel boot command-line option
"numa_spinlock=on/off" (default is "auto").
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Reviewed-by: Waiman Long
---
.../admin-guide/kernel-parameters.txt
> On Jul 28, 2020, at 4:00 PM, Waiman Long wrote:
>
> On 4/3/20 4:59 PM, Alex Kogan wrote:
>> In CNA, spinning threads are organized in two queues, a primary queue for
>> threads running on the same node as the current lock holder, and a
>> secondary queue for thr
Hi, Peter, Longman (and everyone on this list),
Hope you are doing well.
I was wondering whether you have had a chance to review this series,
and have any further comments.
Thanks,
— Alex
> On Apr 3, 2020, at 4:59 PM, Alex Kogan wrote:
>
> Changes from v9:
>
> On Oct 18, 2019, at 12:03 PM, Waiman Long wrote:
>
> On 10/16/19 12:29 AM, Alex Kogan wrote:
>> +static inline void cna_pass_lock(struct mcs_spinlock *node,
>> + struct mcs_spinlock *next)
>> +{
>> +struct cna_node *cn = (struc
> On Oct 16, 2019, at 4:57 PM, Waiman Long wrote:
>
> On 10/16/19 12:29 AM, Alex Kogan wrote:
>> In CNA, spinning threads are organized in two queues, a main queue for
>> threads running on the same node as the current lock holder, and a
>> secondary queue for thr
This optimization reduces the probability that threads will be shuffled between
the main and secondary queues when the secondary queue is empty.
It is helpful when the lock is only lightly contended.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock_cna.h | 30
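(A sketch of the probabilistic gate; the constant and helper names are
assumptions of this sketch. A cheap per-CPU xorshift generator decides, while
the secondary queue is empty, whether a given hand-off bothers scanning the
main queue for a same-node successor at all.)

#define SHUFFLE_REDUCTION_PROB_ARG	7	/* scan with probability ~1/128 */

static DEFINE_PER_CPU(u32, seed);

/* xorshift32: cheap enough to run on every contended hand-off */
static inline u32 xor_random(void)
{
	u32 v = this_cpu_read(seed);

	if (v == 0)
		v = 1;	/* xorshift state must never be zero */
	v ^= v << 13;
	v ^= v >> 17;
	v ^= v << 5;
	this_cpu_write(seed, v);
	return v;
}

/* nonzero (i.e., "skip the scan") with probability 1 - 1/2^num_bits */
static inline bool probably(unsigned int num_bits)
{
	return xor_random() & ((1 << num_bits) - 1);
}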
able in previous revisions
of the series.
Further comments are welcome and appreciated.
Alex Kogan (5):
locking/qspinlock: Rename mcs lock/unlock macros and make them more
generic
locking/qspinlock: Refactor the qspinlock slow path
locking/qspinlock: Introduce CNA into the slow path of
overridden with the new kernel boot command-line option
"numa_spinlock=on/off" (default is "auto").
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
arch/x86/Kconfig | 19 +++
arch/x86/include/asm/qspinlock.h | 4 +
arch/x86/kernel/alternative.c |
Keep track of the number of intra-node lock handoffs, and force
inter-node handoff once this number reaches a preset threshold.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock.c | 3 +++
kernel/locking/qspinlock_cna.h | 30
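(A sketch of the counting rule; the constant and field names are assumptions
of this sketch. This is the earlier, counting variant of the threshold; later
revisions of the series replaced it with the time-based check sketched near
the top of this listing. The count of consecutive intra-node hand-offs travels
with the lock, and crossing the threshold forces the next hand-off to the head
of the secondary queue.)

#define INTRA_NODE_HANDOFF_THRESHOLD	(1 << 16)

static inline bool intra_node_threshold_reached(struct cna_node *cn)
{
	/* intra_count is inherited from the previous lock holder and
	 * incremented on every intra-node hand-off */
	return cn->intra_count >= INTRA_NODE_HANDOFF_THRESHOLD;
}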
The mcs unlock macro (arch_mcs_pass_lock) should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored when passing the lock is
different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
Move some of the code manipulating the spin lock into separate functions.
This would allow easier integration of alternative ways to manipulate
that lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock.c | 38 --
1 file
>> +/*
>> + * cna_try_find_next - scan the main waiting queue looking for the first
>> + * thread running on the same NUMA node as the lock holder. If found (call it
>> + * thread T), move all threads in the main queue between the lock holder and
>> + * T to the end of the secondary queue and r
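(A sketch of that scan, following the structure of the quoted comment;
edge-case and race handling is simplified, and cna_splice_tail() is the splice
helper sketched earlier in this listing.)

static struct mcs_spinlock *cna_try_find_next(struct mcs_spinlock *node,
					      struct mcs_spinlock *next)
{
	struct cna_node *cni = (struct cna_node *)next;
	struct cna_node *last = NULL;
	int my_numa_node = ((struct cna_node *)node)->numa_node;

	/* walk the main queue until a same-node waiter or the queue end */
	while (cni && cni->numa_node != my_numa_node) {
		last = cni;
		cni = (struct cna_node *)READ_ONCE(cni->mcs.next);
	}

	if (!cni || !last)	/* none found, or the successor already matches */
		return next;

	/* move the skipped waiters [next .. last] to the secondary queue */
	cna_splice_tail(node, next, &last->mcs);

	return &cni->mcs;	/* thread T from the comment above */
}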
This optimization reduces the probability that threads will be shuffled between
the main and secondary queues when the secondary queue is empty.
It is helpful when the lock is only lightly contended.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock_cna.h | 20
Move some of the code manipulating the spin lock into separate functions.
This would allow easier integration of alternative ways to manipulate
that lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock.c | 38 --
1 file
The new macro should accept the value to be stored into the lock argument
as another argument. This allows using the same macro in cases where the
value to be stored when passing the lock is different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
arch/arm/include/asm
0.042) 1.152 (0.086) 1.585
 72   0.639 (0.028)   1.192 (0.023)   1.863
108   0.621 (0.024)   1.181 (0.028)   1.902
142   0.604 (0.015)   1.158 (0.028)   1.919
Further comments are welcome and appreciated.
Alex Kogan (5):
locking/qspinlock: Rename arch_mcs_spin_unlock_contended to
arch_mcs_pass_lock and
as well. However, this should be
resolved once static_call() is available.) This default behavior can be
overridden with the new kernel boot command-line option
"numa_spinlock=on/off" (default is "auto").
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
-
. Thus, assuming no failures while threads hold the
lock, every thread would be able to acquire the lock after a bounded
number of lock transitions, with high probability.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock_cna.h | 35
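(A sketch of where the bound comes from; helper names are assumptions, and
probably() is the pseudo-random test sketched earlier in this listing. Since
the same-node preference is applied only with probability p < 1 per hand-off,
the number of times a remote waiter can be skipped is geometrically
distributed, hence bounded with high probability.)

#define INTRA_NODE_HANDOFF_PROB_ARG	16	/* illustrative value */

static inline struct mcs_spinlock *
cna_pass_lock(struct mcs_spinlock *node, struct mcs_spinlock *next)
{
	/*
	 * With probability 1/2^INTRA_NODE_HANDOFF_PROB_ARG, skip the
	 * same-node preference: Pr[a remote waiter is skipped k times in
	 * a row] = (1 - 2^-16)^k, which vanishes as k grows.
	 */
	if (probably(INTRA_NODE_HANDOFF_PROB_ARG))
		return find_same_node_successor(node);	/* assumed helper */

	return next;	/* plain FIFO: hand off across node boundaries */
}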
> On Jul 16, 2019, at 10:50 AM, Waiman Long wrote:
>
> On 7/16/19 10:29 AM, Alex Kogan wrote:
>>
>>> On Jul 15, 2019, at 7:22 PM, Waiman Long wrote:
>>>
>>> On 7/15/19 5:30 PM, Waiman Long wrote:
> On Jul 17, 2019, at 4:59 AM, Peter Zijlstra wrote:
>
> On Wed, Jul 17, 2019 at 10:39:44AM +0200, Peter Zijlstra wrote:
>> On Tue, Jul 16, 2019 at 08:47:24PM +0200, Peter Zijlstra wrote:
>
>>> My primary concern was readability; I find the above suggestion much
>>> more readable. Maybe it can
>> *    mcs_node
>> *   +--------+     +------+        +------+
>> *   | next   | --> | next | -> ... | next | -> NULL  [Main queue]
>> *   | locked | -+  +------+        +------+
>> *   +--------+  |
>> *               |  +---------+        +--
>> *               +->|mcs::next| -> ... |n
Hi, Peter.
Thanks for the review and all the suggestions!
A couple of comments are inlined below.
> On Jul 16, 2019, at 11:50 AM, Peter Zijlstra wrote:
>
> On Mon, Jul 15, 2019 at 03:25:34PM -0400, Alex Kogan wrote:
>> +static struct cna_node *find_successor(struct m
> On Jul 16, 2019, at 6:20 AM, Peter Zijlstra wrote:
>
> On Mon, Jul 15, 2019 at 03:25:33PM -0400, Alex Kogan wrote:
>
>> +/*
>> + * set_locked_empty_mcs - Try to set the spinlock value to _Q_LOCKED_VAL,
>> + * and by doing that unlock the MCS lock when its waiting
> On Jul 16, 2019, at 7:05 AM, Peter Zijlstra wrote:
>
> On Mon, Jul 15, 2019 at 03:25:34PM -0400, Alex Kogan wrote:
>> +/**
>> + * find_successor - Scan the main waiting queue looking for the first
>> + * thread running on the same node as the lock holder. If fou
> On Jul 15, 2019, at 5:30 PM, Waiman Long wrote:
>
> On 7/15/19 3:25 PM, Alex Kogan wrote:
>> In CNA, spinning threads are organized in two queues, a main queue for
>> threads running on the same node as the current lock holder, and a
>> secondary queue for threads
This optimization reduces the probability that threads will be shuffled between
the main and secondary queues when the secondary queue is empty.
It is helpful when the lock is only lightly contended.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock_cna.h | 20
. Thus, assuming no failures while threads hold the
lock, every thread would be able to acquire the lock after a bounded
number of lock transitions, with high probability.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock_cna.h | 36
0.728 (0.023) 1.011 (0.117) 1.389
 36   0.720 (0.038)   1.073 (0.127)   1.491
 72   0.652 (0.018)   1.195 (0.017)   1.833
108   0.624 (0.016)   1.178 (0.028)   1.888
142   0.604 (0.015)   1.163 (0.024)   1.925
Further comments are welcome and appreciated.
Alex Kogan (5):
locking/qspinlock: Make arch_mc
resolved once static_call() is available.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
arch/x86/Kconfig | 18 +
arch/x86/include/asm/qspinlock.h | 4 +
arch/x86/kernel/alternative.c | 12 +++
kernel/locking/mcs_spinlock.h | 2 +-
kernel/locking/qspinlock.c
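(A sketch of the boot-time switch; function and flag names are assumptions of
this sketch. The slow path is redirected only when the machine actually spans
more than one NUMA node, honoring the numa_spinlock= override sketched above.)

void __init cna_configure_spin_lock_slowpath(void)
{
	if (numa_spinlock_flag < 0)			/* numa_spinlock=off */
		return;

	/* in auto mode, require more than one NUMA node */
	if (numa_spinlock_flag == 0 && nr_node_ids < 2)
		return;

	pv_ops.lock.queued_spin_lock_slowpath =
			__cna_queued_spin_lock_slowpath;

	pr_info("Enabling CNA spinlock\n");
}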
Move some of the code manipulating the spin lock into separate functions.
This would allow easier integration of alternative ways to manipulate
that lock.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock.c | 40 ++--
1 file
The arch_mcs_spin_unlock_contended macro should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored is different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
arch/arm/include/asm
Hi, Wei.
> On Jun 11, 2019, at 12:22 AM, liwei (GF) wrote:
>
> Hi Alex,
>
> On 2019/3/29 23:20, Alex Kogan wrote:
>> In CNA, spinning threads are organized in two queues, a main queue for
>> threads running on the same node as the current lock holder, and a
>
>> Also, the paravirt code is under arch/x86, while CNA is generic (not
>> x86-specific). Do you still want to see CNA-related patching residing
>> under arch/x86?
>>
>> We still need a config option (something like NUMA_AWARE_SPINLOCKS) to
>> enable CNA patching under this config only, correct?
Hi, Peter, Longman,
> On Apr 3, 2019, at 12:01 PM, Peter Zijlstra wrote:
>
> On Wed, Apr 03, 2019 at 11:39:09AM -0400, Alex Kogan wrote:
>
>>>> The patch that I am looking for is to have a separate
>>>> numa_queued_spinlock_slowpath() that coexists with
Hi, Hanjun.
> On Apr 3, 2019, at 10:02 PM, Hanjun Guo wrote:
>
> Hi Alex,
>
> On 2019/3/29 23:20, Alex Kogan wrote:
>> +
>> +static __always_inline void cna_init_node(struct mcs_spinlock *node, int cpuid,
>> +
> On Apr 1, 2019, at 5:09 AM, Peter Zijlstra wrote:
>
> On Fri, Mar 29, 2019 at 11:20:01AM -0400, Alex Kogan wrote:
>> The following locktorture results are from an Oracle X5-4 server
>> (four Intel Xeon E7-8895 v3 @ 2.60GHz sockets with 18 hyperthreaded
>> c
> On Apr 2, 2019, at 6:37 AM, Peter Zijlstra wrote:
>
> On Fri, Mar 29, 2019 at 11:20:05AM -0400, Alex Kogan wrote:
>> @@ -25,6 +29,18 @@
>>
>> #define MCS_NODE(ptr) ((struct mcs_spinlock *)(ptr))
>>
>> +/* Per-CPU pseudo-random number seed
> On Apr 1, 2019, at 5:33 AM, Peter Zijlstra wrote:
>
> On Mon, Apr 01, 2019 at 11:06:53AM +0200, Peter Zijlstra wrote:
>> On Fri, Mar 29, 2019 at 11:20:04AM -0400, Alex Kogan wrote:
>>> diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.
Peter, Longman, many thanks for your detailed comments!
A few follow-up questions are inlined below.
> On Apr 2, 2019, at 5:43 AM, Peter Zijlstra wrote:
>
> On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
>> On 03/29/2019 11:20 AM, Alex Kogan wro
.138) 4.010 (0.168) 0.976
 32   2.674 (0.125)   2.625 (0.171)   3.958 (0.156)   1.480
 36   2.622 (0.107)   2.553 (0.150)   3.978 (0.116)   1.517
 72   2.009 (0.090)   1.998 (0.092)   3.932 (0.114)   1.957
108   2.154 (0.069)   2.089 (0.090)   3.870 (0.081)   1.797
142   1.953 (0.106)   1.943 (0.111)   3.853 (0.100)
Move some of the code manipulating MCS nodes into separate functions.
This would allow easier integration of alternative ways to manipulate
those nodes.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock.c | 48 +++---
1
This optimization reduces the probability that threads will be shuffled between
the main and secondary queues when the secondary queue is empty.
It is helpful when the lock is only lightly contended.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock_cna.h | 21
The arch_mcs_spin_unlock_contended macro should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored is different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
arch/arm/include/asm
. Thus, assuming no failures while threads hold the
lock, every thread would be able to acquire the lock after a bounded
number of lock transitions, with high probability.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock_cna.h | 55
controlled via a new configuration option
(NUMA_AWARE_SPINLOCKS), which is enabled by default if NUMA is enabled.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
arch/x86/Kconfig | 14 +++
include/asm-generic/qspinlock_types.h | 13 +++
kernel/locking
[ Resending after correcting an issue with the included URL and correcting a typo in Waiman’s name — sorry about that! ]
> On Feb 5, 2019, at 4:22 AM, Peter Zijlstra wrote:
>
> On Mon, Feb 04, 2019 at 10:35:09PM -0500, Alex Kogan wrote:
>>
>>> On Jan 31, 2019, at
> On Jan 31, 2019, at 5:00 AM, Peter Zijlstra wrote:
>
> On Wed, Jan 30, 2019 at 10:01:35PM -0500, Alex Kogan wrote:
>> Choose the next lock holder among spinning threads running on the same
>> socket with high probability rather than always. With small probability,
> On Jan 31, 2019, at 12:38 PM, Waiman Long wrote:
>
> On 01/30/2019 10:01 PM, Alex Kogan wrote:
>> In CNA, spinning threads are organized in two queues, a main queue for
>> threads running on the same socket as the current lock holder, and a
>> secondary queue f
> On Jan 31, 2019, at 4:56 AM, Peter Zijlstra wrote:
>
> On Wed, Jan 30, 2019 at 10:01:32PM -0500, Alex Kogan wrote:
>> Lock throughput can be increased by handing a lock to a waiter on the
>> same NUMA socket as the lock holder, provided care is taken to avoid
starvation by continuously
passing the lock to threads running on the same socket. This issue
will be addressed later in the series.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
include/asm-generic/qspinlock_types.h | 10 +++
kernel/locking/mcs_spinlock.h | 15 +++-
kernel/locking
certain overhead over the probabilistic variant.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
kernel/locking/qspinlock.c | 53 --
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking
The arch_mcs_spin_unlock_contended macro should accept the value to be
stored into the lock argument as another argument. This allows using the
same macro in cases where the value to be stored is different from 1.
Signed-off-by: Alex Kogan
Reviewed-by: Steve Sistare
---
arch/arm/include/asm
appreciated.
Alex Kogan (3):
locking/qspinlock: Make arch_mcs_spin_unlock_contended more generic
locking/qspinlock: Introduce CNA into the slow path of qspinlock
locking/qspinlock: Introduce starvation avoidance into CNA
arch/arm/include/asm/mcs_spinlock.h | 4 +-
include/asm-generic