On 07/13/2012 10:45 AM, Don Morris wrote:
> IIRC the test consisted of a 16GB NUMA system with two 8GB nodes.
> It was running 3 KVM guests, two guests of 3GB memory each, and
> one guest of 6GB.
How many cpus per guest (host threads) and how many physical/logical
cpus per node on the host?
On 07/12/2012 03:02 PM, Rik van Riel wrote:
> On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
>
> At LSF/MM, there was a presentation comparing Peter's
> NUMA code with Andrea's NUMA code. I believe this is
> the main reason why Andrea's code performed better in
> that particular test...
>
>> +	if (sched_feat(NUMA_BALANCE_FILTER)) {
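The quoted patch line gates the NUMA balancing filter behind a `sched_feat()` flag. A minimal userspace sketch of that pattern follows; the feature bit and the `numa_filter_allows()` helper are illustrative stand-ins, not the kernel's actual implementation, and the filter policy shown (only migrate when the destination node holds more of the task's pages) is an assumed example:

```c
#include <assert.h>
#include <stdbool.h>

/* Feature bits, mimicking the kernel's sched_feat() mechanism. */
enum { FEAT_NUMA_BALANCE_FILTER = 1 << 0 };

/* Runtime-togglable feature mask, like /sys/kernel/debug/sched_features. */
static unsigned int sched_features = FEAT_NUMA_BALANCE_FILTER;

static bool sched_feat(unsigned int bit)
{
	return sched_features & bit;
}

/* Hypothetical filter: with the feature on, only allow a migration when
 * the destination node already holds more of the task's pages. */
static bool numa_filter_allows(long pages_on_dst, long pages_on_src)
{
	if (sched_feat(FEAT_NUMA_BALANCE_FILTER))
		return pages_on_dst > pages_on_src;
	return true;	/* filter disabled: always allow the move */
}
```

The point of the flag is that the filter can be switched off at runtime to compare balancing behavior with and without it.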
On 07/09/2012 08:25 AM, Peter Zijlstra wrote:
> On Sun, 2012-07-08 at 14:35 -0400, Rik van Riel wrote:
>> This looks like something that should be fixed before the
>> code is submitted for merging upstream.
> static bool __task_can_migrate(struct task_struct *t, u64 *runtime, int node)
> {
> ...
is what
On 07/09/2012 08:40 AM, Peter Zijlstra wrote:
> On Mon, 2012-07-09 at 14:23 +0200, Peter Zijlstra wrote:
>> It is not yet clear to me how and why your code converges.
> I don't think it does.. but since the scheduler interaction is fairly
> weak it doesn't matter too much from that pov.
Fair enough.
On Mon, 2012-07-09 at 14:23 +0200, Peter Zijlstra wrote:
> > It is not yet clear to me how and why your code converges.
>
> I don't think it does.. but since the scheduler interaction is fairly
> weak it doesn't matter too much from that pov.
>
That is,.. it slowly moves along with the cpu usage, only
On Sun, 2012-07-08 at 14:35 -0400, Rik van Riel wrote:
>
> This looks like something that should be fixed before the
> code is submitted for merging upstream.
static bool __task_can_migrate(struct task_struct *t, u64 *runtime, int node)
{
#ifdef CONFIG_CPUSETS
if (!node_isset(node,
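The truncated `__task_can_migrate()` above opens with a cpuset check: under `CONFIG_CPUSETS`, a task may only move to a node set in its allowed-nodes mask. A self-contained userspace sketch of that check, with a plain bitmask standing in for the kernel's `nodemask_t` and `node_isset()`, and the task struct reduced to the one field the check reads (the full kernel condition is elided in the quote, so only this first test is shown):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the kernel's nodemask_t: one bit per NUMA node. */
typedef unsigned long nodemask_t;

struct task {
	nodemask_t mems_allowed;	/* nodes this task may allocate from */
};

/* Stand-in for the kernel's node_isset() macro: test one node bit. */
static bool node_isset(int node, nodemask_t mask)
{
	return mask & (1UL << node);
}

/* First gate of a __task_can_migrate()-style check: refuse any node
 * outside the task's mems_allowed. */
static bool task_can_migrate_to(const struct task *t, int node)
{
	if (!node_isset(node, t->mems_allowed))
		return false;
	return true;
}
```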
On Sat, 2012-07-07 at 14:26 -0400, Rik van Riel wrote:
>
> You asked how and why Andrea's algorithm converges.
> After looking at both patch sets for a while, and asking
> for clarification, I think I can see how his code converges.
Do share.. what does it balance on and where does it converge to?
On Sat, 2012-07-07 at 14:26 -0400, Rik van Riel wrote:
> > +/*
> > + * Assumes symmetric NUMA -- that is, each node is of equal size.
> > + */
> > +static void set_max_mem_load(unsigned long load)
> > +{
> > +	unsigned long old_load;
> > +
> > +	spin_lock(&max_mem_load.lock);
> > +
On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
+static bool can_move_ne(struct numa_entity *ne)
+{
+	/*
+	 * XXX: consider mems_allowed, stinking cpusets has mems_allowed
+	 * per task and it can actually differ over a whole process, la-la-la.
+	 */
+	return true;
+}
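The XXX comment in `can_move_ne()` flags exactly the problem: with cpusets, `mems_allowed` is per task, so threads of one process can carry different masks, and a conservative mover would have to intersect all of them before accepting a node. A small sketch of that intersection, assuming a flat per-thread mask array (the names and types here are illustrative, not the patch's):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long nodemask_t;	/* one bit per NUMA node */

/* Conservative version of the check can_move_ne() punts on: a node is
 * acceptable for the whole process only if every thread's mems_allowed
 * includes it, i.e. it survives the intersection of all masks. */
static bool can_move_to_node(const nodemask_t *thread_masks, int nthreads,
			     int node)
{
	nodemask_t allowed = ~0UL;
	int i;

	for (i = 0; i < nthreads; i++)
		allowed &= thread_masks[i];	/* intersect per-thread masks */

	return (allowed >> node) & 1;
}
```

If the masks differ, the intersection can be strictly smaller than any one thread's mask, which is why the comment calls the per-task semantics out as awkward.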
On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
+/*
+ * Assumes symmetric NUMA -- that is, each node is of equal size.
+ */
+static void set_max_mem_load(unsigned long load)
+{
+	unsigned long old_load;
+
+	spin_lock(&max_mem_load.lock);
+	old_load = max_mem_load.load;
+	if
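The quote cuts off at the `if`, so the exact update rule isn't visible here. A plausible userspace sketch of the pattern, assuming `set_max_mem_load()` keeps a running maximum of per-node memory load under a lock (a pthread mutex stands in for the kernel spinlock; any decay or reset logic the real patch applies is omitted):

```c
#include <assert.h>
#include <pthread.h>

/* Global record of the highest per-node memory load seen so far,
 * mirroring the max_mem_load structure in the quoted patch. */
static struct {
	pthread_mutex_t lock;
	unsigned long load;
} max_mem_load = { PTHREAD_MUTEX_INITIALIZER, 0 };

/* Sketch only: take the lock, compare against the recorded maximum,
 * and raise it if this node's load is higher. */
static void set_max_mem_load(unsigned long load)
{
	pthread_mutex_lock(&max_mem_load.lock);
	if (load > max_mem_load.load)
		max_mem_load.load = load;
	pthread_mutex_unlock(&max_mem_load.lock);
}
```

The lock matters because multiple nodes can report their load concurrently; without it the read-compare-write of the maximum would race.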