On 09/18, Peter Zijlstra wrote:
>
> On Fri, Sep 18, 2020 at 12:01:12PM +0200, pet...@infradead.org wrote:
> > +   u64 sum = per_cpu_sum(*(u64 *)sem->read_count);
>
> Moo, that doesn't work, we have to do two separate sums.

Or we can re-introduce "atomic_t slow_read_ctr".

        percpu_up_read_irqsafe(sem)
        {
                preempt_disable();
                /* drop the reader slot on the atomic counter, not the per-cpu one */
                atomic_dec_release(&sem->slow_read_ctr);
                /* a writer may be sleeping in rcuwait until readers_active_check() */
                if (!rcu_sync_is_idle(&sem->rss))
                        rcuwait_wake_up(&sem->writer);
                preempt_enable();
        }

        readers_active_check(sem)
        {
                /* each half can wrap on its own; only the combined sum matters */
                unsigned int sum = per_cpu_sum(*sem->read_count) +
                        (unsigned int)atomic_read(&sem->slow_read_ctr);
                if (sum)
                        return false;
                ...
        }

Of course, this assumes that atomic_t->counter underflows "correctly", i.e. wraps
around just like "unsigned int", so the per-cpu increments and the atomic
decrements cancel in the combined sum.

But again, do we really want this?

Oleg.
