On Sun, 2012-07-29 at 01:13 -0400, Mikulas Patocka wrote:

> Each cpu should have its own rw semaphore in its cache, so I don't see a 
> problem there.
> 
> When you change block size, all 4096 rw semaphores are locked for write, 
> but changing block size is not a performance sensitive operation.
> 
> > Really you shouldn't use a rwlock in a path where this might hurt performance.
> > 
> > RCU is probably a better answer.
> 
> RCU is meaningless here. RCU allows lockless dereference of a pointer. 
> Here the problem is not pointer dereference; the problem is that the 
> integer bd_block_size may change.

So add a pointer if you need to. That's the point.
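
Something along these lines, as a rough sketch (the struct and the
function names here are made up for illustration, this is not actual
block layer code):

struct blkdev_geom {
        unsigned int block_size;
};

static struct blkdev_geom __rcu *bd_geom;

static unsigned int read_block_size(void)
{
        unsigned int size;

        rcu_read_lock();
        size = rcu_dereference(bd_geom)->block_size; /* one pointer read */
        rcu_read_unlock();
        return size;
}

static int set_block_size(unsigned int new_size)
{
        struct blkdev_geom *g, *old;

        g = kmalloc(sizeof(*g), GFP_KERNEL);
        if (!g)
                return -ENOMEM;
        g->block_size = new_size;
        old = rcu_dereference_protected(bd_geom, 1); /* writers serialized elsewhere */
        rcu_assign_pointer(bd_geom, g);
        synchronize_rcu();      /* wait for readers still using 'old' */
        kfree(old);
        return 0;
}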

> 
> > (bdev->bd_block_size should be read exactly once)
> 
> Rewrite all direct and non-direct I/O code so that it reads the block 
> size just once ...


You introduced percpu rw semaphores; that's only an incentive for people
to use that infrastructure elsewhere.

And it's a big hammer:

sizeof(struct rw_semaphore) = 0x70 (112 bytes)

You can probably design something needing no more than 4 bytes per CPU,
and this thing could use non-locked operations as a bonus.

like the following ...

struct percpu_rw_semaphore {
        /* percpu_sem_down_read() uses the following in its fast path */
        unsigned int __percpu *active_counters;

        unsigned int __percpu *counters;
        struct rw_semaphore     sem; /* used in slow path and by writers */
};

static inline int percpu_sem_init(struct percpu_rw_semaphore *p)
{
        p->counters = alloc_percpu(unsigned int);
        if (!p->counters)
                return -ENOMEM;
        init_rwsem(&p->sem);
        p->active_counters = p->counters;
        return 0;
}


static inline bool percpu_sem_down_read(struct percpu_rw_semaphore *p)
{
        unsigned int __percpu *counters;

        /*
         * The pointer test and the increment must happen inside one
         * RCU read-side section, so that percpu_sem_down_write() can
         * use synchronize_rcu() to wait for the increment to become
         * visible before it starts summing the counters.
         */
        rcu_read_lock();
        counters = ACCESS_ONCE(p->active_counters);
        if (counters) {
                this_cpu_inc(*counters);
                rcu_read_unlock();
                return true;
        }
        rcu_read_unlock();
        down_read(&p->sem);
        return false;
}

static inline void percpu_sem_up_read(struct percpu_rw_semaphore *p, bool fastpath)
{
        if (fastpath)
                this_cpu_dec(*p->counters);
        else
                up_read(&p->sem);
}

static inline unsigned int percpu_count(unsigned int __percpu *counters)
{
        unsigned int total = 0;
        int cpu;

        for_each_possible_cpu(cpu)
                total += *per_cpu_ptr(counters, cpu);

        return total;
}

static inline void percpu_sem_down_write(struct percpu_rw_semaphore *p)
{
        down_write(&p->sem);
        p->active_counters = NULL;
        /*
         * Wait for every reader that already fetched the old pointer
         * to finish its this_cpu_inc() before sampling the counters;
         * otherwise we could see a sum of zero and race with a reader
         * that is still entering its critical section.
         */
        synchronize_rcu();

        while (percpu_count(p->counters))
                schedule();
}

static inline void percpu_sem_up_write(struct percpu_rw_semaphore *p)
{
        p->active_counters = p->counters;
        up_write(&p->sem);
}
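
Callers would then do something like this (made-up usage, only to show
how the fastpath flag threads from down_read() to up_read()):

        static struct percpu_rw_semaphore block_size_sem;
        bool fastpath;

        /* reader side, e.g. around an I/O that needs a stable block size */
        fastpath = percpu_sem_down_read(&block_size_sem);
        /* ... read bd_block_size and do the work ... */
        percpu_sem_up_read(&block_size_sem, fastpath);

        /* writer side, when changing the block size */
        percpu_sem_down_write(&block_size_sem);
        /* ... update bd_block_size ... */
        percpu_sem_up_write(&block_size_sem);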



