When stealing from a remote CPU, all available tags are moved from the remote CPU's cache to the stealing CPU's cache. Since no effort is made to select the best CPU to steal from, the victim might be actively performing IO and, as a result of losing all of its local tags, could provoke a further cycle of cache bouncing and/or stealing.
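
To make the batched transfer concrete, here is a rough userspace sketch
(not part of the patch; the helper is re-typed here purely for
illustration and the batch value is made up) of stealing at most a
batch of tags instead of draining the whole remote freelist:

	#include <stdio.h>
	#include <string.h>

	/* Move 'nr' tags from the tail of src to the tail of dst. */
	static void move_tags(unsigned *dst, unsigned *dst_nr,
			      unsigned *src, unsigned *src_nr,
			      unsigned nr)
	{
		*src_nr -= nr;
		memcpy(dst + *dst_nr, src + *src_nr, sizeof(unsigned) * nr);
		*dst_nr += nr;
	}

	int main(void)
	{
		unsigned remote[8] = { 1, 2, 3, 4, 5, 6 }, local[8];
		unsigned remote_nr = 6, local_nr = 0;
		unsigned batch = 4;	/* stands in for pool->percpu_batch_size */

		/* Steal at most 'batch' tags; the victim keeps the rest. */
		move_tags(local, &local_nr, remote, &remote_nr,
			  remote_nr < batch ? remote_nr : batch);

		printf("stolen %u, remote keeps %u\n", local_nr, remote_nr);
		return 0;
	}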

This update is an attempt to soften the described scenario by limiting
the number of tags stolen at once to percpu_batch_size.

Signed-off-by: Alexander Gordeev <agord...@redhat.com>
Cc: Kent Overstreet <k...@daterainc.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Jens Axboe <ax...@kernel.dk>
Cc: "Nicholas A. Bellinger" <n...@linux-iscsi.org>
---
 lib/percpu_ida.c |   18 ++++++++++--------
 1 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/lib/percpu_ida.c b/lib/percpu_ida.c
index fad029c..b4c4cc7 100644
--- a/lib/percpu_ida.c
+++ b/lib/percpu_ida.c
@@ -84,20 +84,22 @@ static inline void steal_tags(struct percpu_ida *pool,
 		pool->cpu_last_stolen = cpu;
 		remote = per_cpu_ptr(pool->tag_cpu, cpu);
 
-		cpumask_clear_cpu(cpu, &pool->cpus_have_tags);
-
-		if (remote == tags)
+		if (remote == tags) {
+			cpumask_clear_cpu(cpu, &pool->cpus_have_tags);
 			continue;
+		}
 
 		spin_lock(&remote->lock);
 
 		if (remote->nr_free) {
-			memcpy(tags->freelist,
-			       remote->freelist,
-			       sizeof(unsigned) * remote->nr_free);
+			const struct percpu_ida *p = pool;
+
+			move_tags(tags->freelist, &tags->nr_free,
+				  remote->freelist, &remote->nr_free,
+				  min(remote->nr_free, p->percpu_batch_size));
 
-			tags->nr_free = remote->nr_free;
-			remote->nr_free = 0;
+			if (!remote->nr_free)
+				cpumask_clear_cpu(cpu, &pool->cpus_have_tags);
 		}
 
 		spin_unlock(&remote->lock);
-- 
1.7.7.6

-- 
Regards,
Alexander Gordeev
agord...@redhat.com