* Alex Shi <lkml.a...@gmail.com> wrote:

> >
> > Those of you who would like to test all the latest patches are
> > welcome to pick up latest bits at tip:master:
> >
> >    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
> >
> 
> I am wondering whether this is a problem, but it still exists on HEAD: c418de93e39891
> http://article.gmane.org/gmane.linux.kernel.mm/90131/match=compiled+with+name+pl+and+start+it+on+my
> 
> For example, when I just start 4 pl tasks, often 3 are running on node
> 0 and 1 is running on node 1. The old load balancer would evenly
> assign the tasks to different nodes and different cores.

This is "normal" in the sense that the current mainline 
scheduler is (supposed to be) doing something similar: if the 
node is still within capacity, then there's no reason to move 
those threads.

OTOH, I think with NUMA balancing we indeed want to spread them 
better, if those tasks do not share memory with each other but 
use their own memory. If they share memory then they should 
remain on the same node if possible.
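To make that concrete, here is a toy userspace sketch of the placement 
heuristic I mean. This is not the kernel implementation; the two-node 
setup, the capacity constant and the helpers are made-up assumptions, 
purely to illustrate "consolidate sharers while within capacity, spread 
independent tasks":

/*
 * Toy userspace model of the placement policy described above.
 * NOT kernel code: the task struct, node capacities and the
 * two-node setup are illustrative assumptions only.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_NODES	2
#define NODE_CAPACITY	4	/* assumed per-node task capacity */

struct task {
	const char *name;
	bool shares_memory;	/* shares memory with the other tasks? */
	int node;		/* chosen node, -1 = unplaced */
};

static int load[NR_NODES];	/* tasks currently placed per node */

/* Pick the least-loaded node -- used to spread independent tasks. */
static int least_loaded_node(void)
{
	int best = 0;

	for (int n = 1; n < NR_NODES; n++)
		if (load[n] < load[best])
			best = n;
	return best;
}

static void place(struct task *t, int preferred_node)
{
	int node = preferred_node;

	/*
	 * Memory-sharing tasks stay on their preferred node as long as
	 * it is within capacity; otherwise, or if the task only uses
	 * its own memory, spread it to the least-loaded node.
	 */
	if (!t->shares_memory || load[node] >= NODE_CAPACITY)
		node = least_loaded_node();

	t->node = node;
	load[node]++;
}

int main(void)
{
	/* Four independent "pl"-style tasks, as in the report above. */
	struct task tasks[] = {
		{ "pl-0", false, -1 },
		{ "pl-1", false, -1 },
		{ "pl-2", false, -1 },
		{ "pl-3", false, -1 },
	};

	for (int i = 0; i < 4; i++) {
		place(&tasks[i], 0);
		printf("%s -> node %d\n", tasks[i].name, tasks[i].node);
	}
	return 0;
}

With four independent tasks and two nodes, this toy ends up with a 2/2 
split, which is the spread behaviour the report above expected; flipping 
shares_memory to true keeps all four on node 0 until it runs out of 
capacity.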

Thanks,

        Ingo