On Wed, 2014-06-04 at 13:57 -0400, Andev wrote:
> Hi,
>
> On Tue, Apr 29, 2014 at 11:11 AM, Paul E. McKenney
> wrote:
> > On Mon, Apr 28, 2014 at 05:50:49PM -0700, Tim Chen wrote:
> >> On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
> >>
> >> > > +#ifdef CONFIG_SMP
> >> > > +static
Hi,
On Tue, Apr 29, 2014 at 11:11 AM, Paul E. McKenney
wrote:
> On Mon, Apr 28, 2014 at 05:50:49PM -0700, Tim Chen wrote:
>> On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
>>
>> > > +#ifdef CONFIG_SMP
>> > > +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
>> > >
On Wed, 2014-04-30 at 14:06 -0700, Davidlohr Bueso wrote:
> > > > >
> > > > > if (count == RWSEM_WAITING_BIAS)
> > > > > old = cmpxchg(&sem->count, count, count +
> > > > > RWSEM_ACTIVE_BIAS);
> > > > > else
> > > > > old =
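The snippet above is cut off mid-expression, but the transition it implements can be modeled in plain C. The following is a userspace sketch, not the kernel code: the bias constants mirror the 64-bit kernel layout, `cmpxchg_long()` stands in for the kernel's `cmpxchg()`, and the `count == 0` branch is an assumption reconstructed from the surrounding discussion.

```c
#include <assert.h>
#include <stdbool.h>

/* Bias values as in the 64-bit kernel rwsem layout. */
#define RWSEM_ACTIVE_BIAS        1L
#define RWSEM_WAITING_BIAS       (-0x100000000L)
#define RWSEM_ACTIVE_WRITE_BIAS  (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

/* Userspace stand-in for kernel cmpxchg(): returns the prior value. */
static long cmpxchg_long(long *p, long old, long new_val)
{
    __atomic_compare_exchange_n(p, &old, new_val, false,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return old;
}

/* Model of the quoted transition: with only waiters queued
 * (count == RWSEM_WAITING_BIAS) the writer takes the lock while
 * keeping the waiting bias in place; with a free lock (count == 0)
 * it must install the full write bias. Any other count means active
 * lockers, so the attempt fails. */
static bool try_write_lock(long *count)
{
    long c = *count;

    if (c == RWSEM_WAITING_BIAS)
        return cmpxchg_long(count, c, c + RWSEM_ACTIVE_BIAS) == c;
    if (c == 0)
        return cmpxchg_long(count, 0, RWSEM_ACTIVE_WRITE_BIAS) == 0;
    return false;
}
```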
On Wed, 2014-04-30 at 14:01 -0700, Tim Chen wrote:
> On Wed, 2014-04-30 at 11:08 -0700, Tim Chen wrote:
> > On Wed, 2014-04-30 at 20:04 +0200, Peter Zijlstra wrote:
> > > On Wed, Apr 30, 2014 at 10:50:09AM -0700, Tim Chen wrote:
> > > > On Wed, 2014-04-30 at 10:27 +0200, Peter Zijlstra wrote:
> >
On Wed, 2014-04-30 at 11:08 -0700, Tim Chen wrote:
> On Wed, 2014-04-30 at 20:04 +0200, Peter Zijlstra wrote:
> > On Wed, Apr 30, 2014 at 10:50:09AM -0700, Tim Chen wrote:
> > > On Wed, 2014-04-30 at 10:27 +0200, Peter Zijlstra wrote:
> > > > On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr
On Wed, 2014-04-30 at 20:04 +0200, Peter Zijlstra wrote:
> On Wed, Apr 30, 2014 at 10:50:09AM -0700, Tim Chen wrote:
> > On Wed, 2014-04-30 at 10:27 +0200, Peter Zijlstra wrote:
> > > On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> > > > +/*
> > > > + * Try to acquire write lock
On Wed, Apr 30, 2014 at 10:50:09AM -0700, Tim Chen wrote:
> On Wed, 2014-04-30 at 10:27 +0200, Peter Zijlstra wrote:
> > On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> > > +/*
> > > + * Try to acquire write lock before the writer has been put on wait
> > > queue.
> > > + */
>
On Wed, 2014-04-30 at 10:27 +0200, Peter Zijlstra wrote:
> On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> > +/*
> > + * Try to acquire write lock before the writer has been put on wait queue.
> > + */
> > +static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore
On Wed, Apr 30, 2014 at 09:33:34AM -0700, Jason Low wrote:
> > static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > {
> > struct task_struct *owner;
> > bool on_cpu = false;
>
> Wouldn't we want to initialize on_cpu = true. For the !owner case, I
> would expect
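Jason's point is about default polarity: when no owner is recorded, the lock may be free or reader-held, and the caller should still be allowed to try spinning. A reduced userspace model of that polarity (the struct definitions are stand-ins; in the kernel the owner read happens under `rcu_read_lock()` with `ACCESS_ONCE()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Reduced stand-ins for the kernel structures. */
struct task_struct { bool on_cpu; };
struct rw_semaphore { long count; struct task_struct *owner; };

/* With no recorded owner, default to "spinnable" and let the caller
 * try; with an owner, spin only while that owner is actually running
 * on a CPU. */
static bool can_spin_on_owner(struct rw_semaphore *sem)
{
    struct task_struct *owner = sem->owner;
    bool on_cpu = true;              /* !owner: still worth spinning */

    if (owner)
        on_cpu = owner->on_cpu;
    return on_cpu;
}
```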
On Wed, 2014-04-30 at 12:00 +0200, Peter Zijlstra wrote:
> On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> > __visible
> > struct rw_semaphore __sched *rwsem_down_write_failed(struct rw_semaphore
> > *sem)
> > {
> > - long count, adjustment = -RWSEM_ACTIVE_WRITE_BIAS;
> >
On Wed, 2014-04-30 at 11:21 +0200, Peter Zijlstra wrote:
> On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> > +#ifdef CONFIG_SMP
>
> > +#else
> > +static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
> > +{
> > + return false;
> > +}
> > +#endif
>
> On the mutex side
On Wed, Apr 30, 2014 at 09:32:17AM -0700, Davidlohr Bueso wrote:
> On Wed, 2014-04-30 at 11:21 +0200, Peter Zijlstra wrote:
> > On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> > > +#ifdef CONFIG_SMP
> >
> > > +#else
> > > +static bool rwsem_optimistic_spin(struct rw_semaphore
On Wed, Apr 30, 2014 at 2:04 AM, Peter Zijlstra wrote:
> On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
>> +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
>> +{
>> + int retval;
>
> And yet the return value is bool.
>
>> + struct task_struct *owner;
On Wed, 2014-04-30 at 11:14 +0200, Peter Zijlstra wrote:
> On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> > +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > +{
> > + int retval;
> > + struct task_struct *owner;
> > +
> > + rcu_read_lock();
> > +
On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> __visible
> struct rw_semaphore __sched *rwsem_down_write_failed(struct rw_semaphore
> *sem)
> {
> - long count, adjustment = -RWSEM_ACTIVE_WRITE_BIAS;
> + long count;
> struct rwsem_waiter waiter;
> struct
On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> +#ifdef CONFIG_SMP
> +#else
> +static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
> +{
> + return false;
> +}
> +#endif
On the mutex side we guard this with MUTEX_SPIN_ON_OWNER, do we want to
use that here too?
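For reference, the stub pattern Peter is pointing at looks like the sketch below; `CONFIG_RWSEM_SPIN_ON_OWNER` is a placeholder name here, since the thread is debating whether plain `CONFIG_SMP` or a `MUTEX_SPIN_ON_OWNER`-style guard is the right gate, and the `rw_semaphore` definition is a reduced stand-in.

```c
#include <assert.h>
#include <stdbool.h>

struct rw_semaphore { long count; };   /* reduced stand-in */

#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
    /* real spinning implementation would live here */
}
#else
/* Fallback when spinning is configured out: always report failure so
 * the caller falls straight through to the sleeping slowpath, with no
 * #ifdefs at the call site. */
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
    (void)sem;
    return false;
}
#endif
```

The value of the pattern is that `rwsem_down_write_failed()` can call `rwsem_optimistic_spin()` unconditionally on every configuration.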
On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> +{
> + int retval;
> + struct task_struct *owner;
> +
> + rcu_read_lock();
> + owner = ACCESS_ONCE(sem->owner);
> +
> + /* Spin only if
On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> +{
> + int retval;
And yet the return value is bool.
> + struct task_struct *owner;
> +
> + rcu_read_lock();
> + owner =
On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> +/*
> + * Try to acquire write lock before the writer has been put on wait queue.
> + */
> +static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
> +{
> + long count = ACCESS_ONCE(sem->count);
> +retry:
> +
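The function body is truncated at the `retry:` label, but the shape under review is a cmpxchg retry loop: re-read the count whenever the cmpxchg loses a race, and bail out as soon as the count shows lockers other than queued waiters. A hedged userspace sketch of that shape (the exact retry policy is an assumption; the kernel version operates on `sem->count` with the real `cmpxchg()`):

```c
#include <assert.h>
#include <stdbool.h>

#define RWSEM_ACTIVE_BIAS        1L
#define RWSEM_WAITING_BIAS       (-0x100000000L)
#define RWSEM_ACTIVE_WRITE_BIAS  (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

/* Userspace stand-in for kernel cmpxchg(): returns the prior value. */
static long cmpxchg_long(long *p, long old, long new_val)
{
    __atomic_compare_exchange_n(p, &old, new_val, false,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return old;
}

/* Unqueued write trylock: only a free lock (0) or a waiters-only
 * count is acquirable. On cmpxchg failure, retry with the freshly
 * observed value instead of re-reading separately. */
static bool try_write_lock_unqueued(long *count)
{
    long old, c = *count;

    while (c == 0 || c == RWSEM_WAITING_BIAS) {
        old = cmpxchg_long(count, c,
                           c + (c == 0 ? RWSEM_ACTIVE_WRITE_BIAS
                                       : RWSEM_ACTIVE_BIAS));
        if (old == c)
            return true;   /* acquired */
        c = old;           /* lost the race; retry with fresh value */
    }
    return false;
}
```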
On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> @@ -26,6 +27,10 @@ struct rw_semaphore {
> long count;
> raw_spinlock_t wait_lock;
> struct list_head wait_list;
> +#ifdef CONFIG_SMP
> + struct task_struct *owner;
On Tue, 2014-04-29 at 08:11 -0700, Paul E. McKenney wrote:
> On Mon, Apr 28, 2014 at 05:50:49PM -0700, Tim Chen wrote:
> > On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
> >
> > > > +#ifdef CONFIG_SMP
> > > > +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > >
On Mon, Apr 28, 2014 at 05:50:49PM -0700, Tim Chen wrote:
> On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
>
> > > +#ifdef CONFIG_SMP
> > > +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > > +{
> > > + int retval;
> > > + struct task_struct *owner;
> > > +
> >
On Mon, 2014-04-28 at 17:50 -0700, Tim Chen wrote:
> On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
>
> > > +#ifdef CONFIG_SMP
> > > +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > > +{
> > > + int retval;
> > > + struct task_struct *owner;
> > > +
> > > +
On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
> > +#ifdef CONFIG_SMP
> > +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > +{
> > + int retval;
> > + struct task_struct *owner;
> > +
> > + rcu_read_lock();
> > + owner = ACCESS_ONCE(sem->owner);
>
>
On Mon, Apr 28, 2014 at 03:09:01PM -0700, Davidlohr Bueso wrote:
> We have reached the point where our mutexes are quite fine tuned
> for a number of situations. This includes the use of heuristics
> and optimistic spinning, based on MCS locking techniques.
>
> Exclusive ownership of read-write
We have reached the point where our mutexes are quite fine tuned
for a number of situations. This includes the use of heuristics
and optimistic spinning, based on MCS locking techniques.
Exclusive ownership of read-write semaphores is, conceptually,
just about the same as mutexes, making them
On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
+#ifdef CONFIG_SMP
+static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
+{
+ int retval;
+ struct task_struct *owner;
+
+ rcu_read_lock();
+ owner = ACCESS_ONCE(sem->owner);
OK, I'll bite...
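The exchange keeps coming back to `ACCESS_ONCE(sem->owner)`. A userspace rendition of the macro as the kernel defined it at the time (a volatile cast) shows the guarantee at stake: the compiler must emit exactly one load, with no register caching or refetching.

```c
#include <assert.h>

/* Userspace rendition of the then-current kernel ACCESS_ONCE(): force
 * exactly one load/store of x by going through a volatile-qualified
 * pointer. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static int shared = 42;

static int read_once(void)
{
    return ACCESS_ONCE(shared);  /* a single real load, even under -O2 */
}
```

This matters for the spin loop: without it, the compiler could legally hoist the `sem->owner` read out of the loop and spin on a stale value forever.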