On 9/6/2017 9:12 AM, Håkon Bugge wrote:
[...]
> Hi Santosh,
>
> Yes, I agree about the accuracy of s_send_lock_queue_raced. But the main
> point is that the existing code counts a partial share of the cases in
> which it is _not_ raced. So, in the critical path, my patch adds one
> test_bit(), which hits the local [...]
>
>> On 6 Sep 2017, at 17:58, Santosh Shilimkar
>> wrote:
>>
>> On 9/6/2017 8:29 AM, Håkon Bugge wrote:
>>> In rds_send_xmit() there is logic to batch the sends. However, if
>>> another thread has acquired the lock, it is considered a race and we
>>> yield. The code incrementing the s_send_lock_queue_raced statistics
>>> counter did not count this event correctly.
>>>
>>> This commit removes a small race in determining the [...]
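For readers outside net/rds, the pattern under discussion looks roughly like
the sketch below. This is a simplified user-space model, not the kernel code
or the patch itself: the names used here (conn_path, in_xmit,
send_queue_raced) are only stand-ins for rds_conn_path, the RDS_IN_XMIT bit
and the s_send_lock_queue_raced counter. The point it illustrates is that the
raced counter should be bumped exactly when another thread already owns the
transmit path.

    /*
     * Simplified model of "batch the sends, yield and count a race if
     * another thread already holds the transmit path".  Illustrative
     * names only; this is not the actual net/rds implementation.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct conn_path {
        atomic_flag in_xmit;            /* "someone is transmitting" bit */
        atomic_ulong send_queue_raced;  /* how often we lost the race    */
        atomic_ulong sent;              /* batches actually transmitted  */
    };

    /* Try to become the transmitting thread; false means somebody beat us. */
    static bool acquire_in_xmit(struct conn_path *cp)
    {
        return !atomic_flag_test_and_set(&cp->in_xmit);
    }

    static void release_in_xmit(struct conn_path *cp)
    {
        atomic_flag_clear(&cp->in_xmit);
    }

    static void send_xmit(struct conn_path *cp)
    {
        if (!acquire_in_xmit(cp)) {
            /* Another thread already owns the transmit path: this, and
             * only this, is the event the raced counter should record. */
            atomic_fetch_add(&cp->send_queue_raced, 1);
            return;
        }

        /* ... drain a batch of messages from the send queue here ... */
        atomic_fetch_add(&cp->sent, 1);

        release_in_xmit(cp);
    }

    int main(void)
    {
        struct conn_path cp = { .in_xmit = ATOMIC_FLAG_INIT };

        send_xmit(&cp);  /* uncontended call: a normal send */
        printf("sent=%lu raced=%lu\n",
               (unsigned long)atomic_load(&cp.sent),
               (unsigned long)atomic_load(&cp.send_queue_raced));
        return 0;
    }

Built with any C11 compiler (e.g. gcc -std=c11), the single uncontended call
above reports sent=1 raced=0; a second thread calling send_xmit() while the
first still holds in_xmit is the event that should increment the raced
counter, which is the distinction the thread above is arguing about.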