On Wed, Apr 24, 2019 at 10:02 AM Peter Zijlstra wrote:
>
> > For an uncontended rwsem, this offers no real benefit. Adding
> > preempt_disable() is more complicated than I originally thought.
>
> I'm not sure I get your objection?
I'm not sure it's an objection, but I do think that it's sad if
On 4/24/19 1:01 PM, Peter Zijlstra wrote:
> On Wed, Apr 24, 2019 at 12:49:05PM -0400, Waiman Long wrote:
>> On 4/24/19 3:09 AM, Peter Zijlstra wrote:
>>> On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
That is true in general, but doing preempt_disable/enable across
function boundary is ugly and prone to further problems down the road.
On Wed, Apr 24, 2019 at 12:49:05PM -0400, Waiman Long wrote:
> On 4/24/19 3:09 AM, Peter Zijlstra wrote:
> > On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
> >> That is true in general, but doing preempt_disable/enable across
> >> function boundary is ugly and prone to further problems down the road.
On 4/24/19 3:09 AM, Peter Zijlstra wrote:
> On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
>> That is true in general, but doing preempt_disable/enable across
>> function boundary is ugly and prone to further problems down the road.
> We do worse things in this code, and the thing Linus proposes is
> actually quite simple,
On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
> That is true in general, but doing preempt_disable/enable across
> function boundary is ugly and prone to further problems down the road.
We do worse things in this code, and the thing Linus proposes is
actually quite simple,
On 4/23/19 3:34 PM, Peter Zijlstra wrote:
> On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
>> On 4/23/19 12:27 PM, Linus Torvalds wrote:
>>> On Tue, Apr 23, 2019 at 7:17 AM Peter Zijlstra wrote:
I'm not aware of an architecture where disabling interrupts is faster
than disabling preemption.
On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
> On 4/23/19 12:27 PM, Linus Torvalds wrote:
> > On Tue, Apr 23, 2019 at 7:17 AM Peter Zijlstra wrote:
> >> I'm not aware of an architecture where disabling interrupts is faster
> >> than disabling preemption.
> > I don't think it ever
On 4/23/19 12:27 PM, Linus Torvalds wrote:
> On Tue, Apr 23, 2019 at 7:17 AM Peter Zijlstra wrote:
>> I'm not aware of an architecture where disabling interrupts is faster
>> than disabling preemption.
> I don't think it ever is, but I'd worry a bit about the
> preempt_enable() just because it
On Tue, Apr 23, 2019 at 7:17 AM Peter Zijlstra wrote:
>
> I'm not aware of an architecture where disabling interrupts is faster
> than disabling preemption.
I don't think it ever is, but I'd worry a bit about the
preempt_enable() just because it also checks if need_resched() is true
when
On 4/23/19 10:17 AM, Peter Zijlstra wrote:
> On Sun, Apr 21, 2019 at 05:07:56PM -0400, Waiman Long wrote:
>
>> How about the following chunks to disable preemption temporarily for the
>> increment-check-decrement sequence?
>>
>> diff --git a/include/linux/preempt.h b/include/linux/preempt.h
On Sun, Apr 21, 2019 at 05:07:56PM -0400, Waiman Long wrote:
> How about the following chunks to disable preemption temporarily for the
> increment-check-decrement sequence?
>
> diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> index dd92b1a93919..4cc03ac66e13 100644
> --- a/include/linux/preempt.h
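The increment-check-decrement sequence being discussed can be sketched in userspace C11. This is a hypothetical illustration, not the kernel patch: a C11 `atomic_long` stands in for `sem->count`, the `RWSEM_READER_BIAS` value and the negative-count check are assumptions inferred from the snippets quoted further down the thread, and `preempt_disable()`/`preempt_enable()` appear only as comments since they have no userspace equivalent.

```c
#include <stdatomic.h>

/* Assumed layout: one reader adds RWSEM_READER_BIAS to the count word,
 * and a negative count means a writer is present (or the reader count
 * overflowed), so the fast path must back out and take the slowpath. */
#define RWSEM_READER_BIAS (1L << 8)

static atomic_long count;             /* stands in for sem->count */

/* Returns 1 on fast-path success, 0 if the caller must take the slowpath.
 * The preempt calls mark where the proposed patch would place them, so
 * the task cannot be preempted while its transient bias is in the count. */
static int down_read_fastpath(void)
{
    /* preempt_disable(); */
    long c = atomic_fetch_add_explicit(&count, RWSEM_READER_BIAS,
                                       memory_order_acquire);
    if (c < 0) {
        /* Back out our bias before sleeping in the slowpath. */
        atomic_fetch_sub_explicit(&count, RWSEM_READER_BIAS,
                                  memory_order_release);
        /* preempt_enable(); */
        return 0;
    }
    /* preempt_enable(); */
    return 1;
}
```

The point of disabling preemption around the sequence is to bound the window in which the speculative bias inflates the count, which is what the overflow discussion below is about.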
On 4/19/19 3:39 PM, Waiman Long wrote:
> On 04/19/2019 09:15 AM, Peter Zijlstra wrote:
>> On Fri, Apr 19, 2019 at 03:03:04PM +0200, Peter Zijlstra wrote:
>>> On Fri, Apr 19, 2019 at 02:02:07PM +0200, Peter Zijlstra wrote:
On Fri, Apr 19, 2019 at 12:26:47PM +0200, Peter Zijlstra wrote:
> I thought of a horrible horrible alternative:
On Fri, Apr 19, 2019 at 02:02:07PM +0200, Peter Zijlstra wrote:
> On Fri, Apr 19, 2019 at 12:26:47PM +0200, Peter Zijlstra wrote:
> > I thought of a horrible horrible alternative:
>
> Hurm, that's broken as heck. Let me try again.
So I can't make that scheme work; it all ends up wanting to have
On Fri, Apr 19, 2019 at 12:26:47PM +0200, Peter Zijlstra wrote:
> I thought of a horrible horrible alternative:
Hurm, that's broken as heck. Let me try again.
On 04/19/2019 09:15 AM, Peter Zijlstra wrote:
> On Fri, Apr 19, 2019 at 03:03:04PM +0200, Peter Zijlstra wrote:
>> On Fri, Apr 19, 2019 at 02:02:07PM +0200, Peter Zijlstra wrote:
>>> On Fri, Apr 19, 2019 at 12:26:47PM +0200, Peter Zijlstra wrote:
I thought of a horrible horrible alternative:
On Thu, Apr 18, 2019 at 10:54:19AM -0400, Waiman Long wrote:
> On 04/18/2019 10:40 AM, Peter Zijlstra wrote:
> > Having more CPUs than that is not impossible these days.
> >
>
> Having more than 32k CPUs contending for the same cacheline will be
> horribly slow.
No question about that.
On Fri, Apr 19, 2019 at 03:03:04PM +0200, Peter Zijlstra wrote:
> On Fri, Apr 19, 2019 at 02:02:07PM +0200, Peter Zijlstra wrote:
> > On Fri, Apr 19, 2019 at 12:26:47PM +0200, Peter Zijlstra wrote:
> > > I thought of a horrible horrible alternative:
> >
> > Hurm, that's broken as heck. Let me try again.
On 04/18/2019 10:40 AM, Peter Zijlstra wrote:
> On Thu, Apr 18, 2019 at 10:08:28AM -0400, Waiman Long wrote:
>> On 04/18/2019 09:51 AM, Peter Zijlstra wrote:
>>> On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
inline void __down_read(struct rw_semaphore *sem)
{
+
On Thu, Apr 18, 2019 at 10:08:28AM -0400, Waiman Long wrote:
> On 04/18/2019 09:51 AM, Peter Zijlstra wrote:
> > On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
> >> inline void __down_read(struct rw_semaphore *sem)
> >> {
> >> + long count =
On Thu, Apr 18, 2019 at 10:08:28AM -0400, Waiman Long wrote:
> On 04/18/2019 09:51 AM, Peter Zijlstra wrote:
> > On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
> >> inline void __down_read(struct rw_semaphore *sem)
> >> {
> >> + long count =
On 04/18/2019 09:51 AM, Peter Zijlstra wrote:
> On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
>> inline void __down_read(struct rw_semaphore *sem)
>> {
>> + long count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
>> +                                             &sem->count);
On Sat, Apr 13, 2019 at 01:22:57PM -0400, Waiman Long wrote:
> inline void __down_read(struct rw_semaphore *sem)
> {
> + long count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
> +                                            &sem->count);
> +
> + if (unlikely(count &
The upper bits of the count field are used as the reader count. When a
sufficient number of active readers are present, the most significant
bit will be set and the count becomes negative. If the number of active
readers keeps piling up, we may eventually overflow the reader count.
This is not likely
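The overflow arithmetic behind that paragraph can be shown with a small standalone function. The 32-bit layout used here (one reader adds `1 << 16`, readers live in bits 16..30, bit 31 is the sign bit) is an assumption inferred from the "more than 32k CPUs" exchange earlier in the thread, not taken from the patch itself:

```c
#include <stdint.h>

/* Assumed 32-bit count layout: reader bias 1 << 16, sign bit at bit 31. */
#define READER_BIAS (INT64_C(1) << 16)

/* How many simultaneous readers does it take to set the sign bit of a
 * 32-bit count word?  The loop runs in 64-bit arithmetic so probing the
 * boundary never triggers signed-overflow undefined behaviour. */
static int readers_until_negative(void)
{
    int64_t count = 0;
    int readers = 0;
    while (!(count & (INT64_C(1) << 31))) {
        count += READER_BIAS;
        readers++;
    }
    return readers;     /* 2^31 / 2^16 = 32768 readers */
}
```

That is, 32768 concurrent readers flip the count negative under these assumptions, matching the 32k figure quoted above; a preempted reader whose bias lingers in the count makes that budget easier to exhaust.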