On Thu, Apr 18, 2019 at 11:15:33AM -0400, Waiman Long wrote:
> On 04/18/2019 09:06 AM, Peter Zijlstra wrote:
> >> +	/*
> >> +	 * Check time threshold every 16 iterations to
> >> +	 * avoid calling sched_clock() too frequently.
> >> +	 */
On 04/18/2019 09:06 AM, Peter Zijlstra wrote:

So I really dislike time based spinning, and we've always rejected it
before.

On Sat, Apr 13, 2019 at 01:22:55PM -0400, Waiman Long wrote:
> +static inline u64 rwsem_rspin_threshold(struct rw_semaphore *sem)
> +{
> +	long count = atomic_long_read(&sem->count);
> +	int reader_cnt =
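The helper quoted above is cut off, but its shape suggests it derives a per-acquisition spin budget from the lock's reader count. A hypothetical userspace sketch of that idea follows; the count encoding, the 25 us per-reader constant, and the cap of 30 are illustrative assumptions, not the patch's actual values:

```c
#include <stdint.h>

#define NSEC_PER_USEC	1000ULL
/* Hypothetical encoding: the low 8 bits of 'count' hold the reader count. */
#define READER_MASK	0xffUL

/*
 * Sketch: give a writer a spin budget proportional to how many readers
 * currently hold the lock, capped so a huge reader count cannot make
 * the writer spin for an unbounded time.  All constants are made up
 * for illustration.
 */
uint64_t rspin_threshold_ns(unsigned long count)
{
	unsigned int readers = count & READER_MASK;

	if (readers > 30)
		readers = 30;
	return 25 * NSEC_PER_USEC * readers;
}
```

The cap is the interesting design point: without it, a reader-heavy workload would translate directly into ever-longer writer spin times, defeating the purpose of bounding the spin.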
When the rwsem is owned by readers, writers stop optimistic spinning
simply because there is no easy way to figure out whether all the
readers are actively running or not. However, there are scenarios
where the readers are unlikely to sleep and optimistic spinning can
help performance.

This patch adds a time-based threshold so that writers can keep
spinning on a reader-owned rwsem for a limited amount of time.