Introduce helper cap_ib_mcast() to help us check if the port of an
IB device supports InfiniBand multicast.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/core/cma.c | 6
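For the shape of such a per-port helper, here is a minimal sketch, assuming it
wraps the raw rdma_ib_or_iboe() check mentioned later in this series;
illustrative only, not necessarily the merged body:

#include <rdma/ib_verbs.h>

/* Sketch: IB multicast only makes sense on IB/IBoE ports. */
static inline bool cap_ib_mcast(struct ib_device *device, u8 port_num)
{
        return rdma_ib_or_iboe(device, port_num);
}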
Introduce helper cap_read_multi_sge() to help us check if the port of an
IB device supports RDMA READ with multiple scatter-gather entries.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
include/rdma/ib_verbs.h
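A sketch of what this check plausibly reduces to, assuming the point is that
iWARP restricts RDMA READ to a single SGE; the legacy rdma_node_get_transport()
is used purely for illustration:

#include <rdma/ib_verbs.h>

/* Sketch: any non-iWARP port can use multiple SGEs for an RDMA READ. */
static inline bool cap_read_multi_sge(struct ib_device *device, u8 port_num)
{
        return rdma_node_get_transport(device->node_type) !=
               RDMA_TRANSPORT_IWARP;
}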
Introduce helper cap_af_ib() to help us check if the port of an
IB device supports native InfiniBand addressing.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/core/cma.c | 2 +-
include/rdma
Introduce helper cap_ipoib() to help us check if the port of an
IB device supports IP over InfiniBand.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/ulp/ipoib/ipoib_main.c | 2 +-
include
Introduce helper cap_ib_cm_dev() to help us check if any port of
an IB device supports the InfiniBand Communication Manager.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/core/cma.c
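The device-wide form presumably just scans the ports; a sketch assuming a
per-port cap_ib_cm() helper from this series and the rdma_start_port()/
rdma_end_port() accessors:

#include <rdma/ib_verbs.h>

/* Sketch: true if any port of the device supports the IB CM. */
static inline bool cap_ib_cm_dev(struct ib_device *device)
{
        u8 port;

        for (port = rdma_start_port(device);
             port <= rdma_end_port(device); port++) {
                if (cap_ib_cm(device, port))
                        return true;
        }
        return false;
}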
We have finished introducing the cap_XX() helpers, and the raw helper
rdma_ib_or_iboe() is no longer necessary, so clean it up.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
include/rdma/ib_verbs.h
Introduce helper cap_eth_ah() to help us check if the port of an
IB device supports Ethernet address handles.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/core/cma.c | 6
We have gotten rid of all the call sites using the legacy
rdma_node_get_transport(); now clean it up.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/core/verbs.c | 21 -
include
Introduce helper cap_ib_sa() to help us check if the port of an
IB device supports the InfiniBand Subnet Administrator.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/core/cma.c | 4
Introduce helper cap_iw_cm() to help us check if the port of an
IB device supports the iWARP Communication Manager.
Cc: Steve Wise
Cc: Tom Talpey
Cc: Jason Gunthorpe
Cc: Doug Ledford
Cc: Ira Weiny
Cc: Sean Hefty
Signed-off-by: Michael Wang
---
drivers/infiniband/core/cma.c | 17
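A sketch, assuming the helper keys off the iWARP transport; the legacy
rdma_node_get_transport() used here is exactly what this series later
removes, so treat the body as illustrative:

#include <rdma/ib_verbs.h>

/* Sketch: the iWARP CM only applies to iWARP transports. */
static inline bool cap_iw_cm(struct ib_device *device, u8 port_num)
{
        return rdma_node_get_transport(device->node_type) ==
               RDMA_TRANSPORT_IWARP;
}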
On 07/12/2012 10:07 PM, Peter Zijlstra wrote:
> On Tue, 2012-07-03 at 14:34 +0800, Michael Wang wrote:
>> From: Michael Wang
>>
>> it's impossible to enter the else branch if we have set skip_clock_update
>> in yield_task_fair(), as yield_to_task_fair() will dire
From: Michael Wang
This patch is trying to provide a way for the user to dynamically change
the behaviour of load balancing by setting flags of the scheduling domain.
Currently it relies on the cpu cgroup, and only SD_LOAD_BALANCE is
implemented. Usage:
1. /sys/fs/cgroup/domain/domain.config_level
Add the missing cc list.
On 07/16/2012 05:16 PM, Michael Wang wrote:
> From: Michael Wang
>
> This patch is trying to provide a way for the user to dynamically change
> the behaviour of load balancing by setting flags of the scheduling domain.
>
> Currently it relies
From: Michael Wang
This patch set provides a way for the user to dynamically configure the
scheduler domain flags, which are usually static.
We can do the configuration through the cpuset cgroup; a new file will be
found under each hierarchy:
sched_smt_domain_flag
-- appears when
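A minimal userspace sketch of driving such a file; the mount point and the
mask format are assumptions, since the cover letter is truncated here
(SD_LOAD_BALANCE is 0x0001 in include/linux/sched.h):

#include <stdio.h>

int main(void)
{
        /* Hypothetical path and format: set SD_LOAD_BALANCE on the SMT domain. */
        FILE *f = fopen("/sys/fs/cgroup/cpuset/sched_smt_domain_flag", "w");

        if (!f)
                return 1;
        fprintf(f, "0x0001\n");
        fclose(f);
        return 0;
}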
From: Michael Wang
Add the variables we need for the implementation of dynamic domain
flags.
Signed-off-by: Michael Wang
---
include/linux/sched.h | 22 ++
kernel/cpuset.c       |  7 +++
2 files changed, 29 insertions(+), 0 deletions(-)
diff --git a/include
From: Michael Wang
Add the functions and code which will do the initialization for dynamic
domain flags.
Signed-off-by: Michael Wang
---
include/linux/sched.h | 10 --
kernel/cpuset.c       |  8 ++--
kernel/sched/core.c   |  2 +-
3 files changed, 15 insertions(+), 5
From: Michael Wang
We will record the domain flags for the cpuset in update_domain_attr and
use them to replace the static domain flags in set_domain_attribute.
Signed-off-by: Michael Wang
---
kernel/cpuset.c     |  7 +++
kernel/sched/core.c | 10 +-
2 files changed, 16 insertions
From: Michael Wang
Add the fundamental functions which will help to record the status of
dynamic domain flags for the cpuset.
Signed-off-by: Michael Wang
---
kernel/cpuset.c | 31 +++
1 files changed, 31 insertions(+), 0 deletions(-)
diff --git a/kernel/cpuset.c b
From: Michael Wang
Add the facility for the user to configure the dynamic domain flags and
enable/disable them.
Signed-off-by: Michael Wang
---
kernel/cpuset.c | 85 +++
1 files changed, 85 insertions(+), 0 deletions(-)
diff --git a/kernel
is big enough to avoid the
warning info.
So is this the fix you mentioned? Or has someone found out the true
reason and fixed it?
Regards,
Michael Wang
>
> regards,
> dan carpenter
>
On 07/20/2012 03:00 PM, Mike Galbraith wrote:
> On Fri, 2012-07-20 at 11:09 +0800, Michael Wang wrote:
>> Hi, Mike, Martin, Dan
>>
>> I'm currently taking an eye on the rcu stall issue which was reported by
>> you in the mail:
>>
>> rcu: endless stal
On 07/20/2012 04:36 PM, Dan Carpenter wrote:
> On Fri, Jul 20, 2012 at 04:24:25PM +0800, Michael Wang wrote:
>> On 07/20/2012 02:41 PM, Dan Carpenter wrote:
>>> My bug was fixed in March. There was an email thread about it when
>>> the merge window opened but I
On 02/22/2014 12:43 AM, Sasha Levin wrote:
> On 02/19/2014 11:32 PM, Michael wang wrote:
>> On 02/20/2014 02:08 AM, Sasha Levin wrote:
>>> >Hi all,
>>> >
>>> >While fuzzing with trinity inside a KVM tools guest, running latest
>>> >-next ke
>> saw :)
>
> Nope, still see it with latest -tip.
>
> I ran tip's master branch, should I have tried a different one?
Hmm... I don't see the changes we expected on master either...
Peter, did we accidentally miss this commit?
http://git.kernel.org/tip/477af33
On 02/24/2014 03:10 PM, Peter Zijlstra wrote:
> On Mon, Feb 24, 2014 at 01:19:15PM +0800, Michael wang wrote:
>> Peter, did we accidentally miss this commit?
>>
>> http://git.kernel.org/tip/477af336ba06ef4c32e97892bb0d2027ce30f466
>
> Ingo dropped it on Saturday becaus
.
Thanks for the comment :)
It was stuck inside pick_next_task_fair(), and we already have one
solution now ;-)
Regards,
Michael Wang
>
> Thanx, Paul
>
>> [] idle_balance+0x10f/0x1c0
>> [] pick_next_task_fair+0x11e
On 02/25/2014 02:21 AM, Sasha Levin wrote:
[snip]
>>
>> Fixes: 38033c37faab ("sched: Push down pre_schedule() and
>> idle_balance()")
>> Cc: Juri Lelli
>> Cc: Ingo Molnar
>> Cc: Steven Rostedt
>> Reported-by: Michael Wang
>> Signed-off-
seeing how we avoid dereferencing
> p->sched_class.
Great, it once crossed my mind, but you achieved this without a new
parameter, so let's ignore my wondering above :)
Regards,
Michael Wang
>
> ---
> Subject: sched: Guarantee task priority in pick_next_task()
> From: Pet
likely(p & 1), I think most CPUs can encode
> that far better than the full pointer immediate.
Agreed, unless odd-alignment stuff appears...
Regards,
Michael Wang
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to maj
On 02/25/2014 06:49 PM, Peter Zijlstra wrote:
> On Tue, Feb 25, 2014 at 12:47:01PM +0800, Michael wang wrote:
>> On 02/24/2014 09:10 PM, Peter Zijlstra wrote:
>>> On Mon, Feb 24, 2014 at 01:12:18PM +0100, Peter Zijlstra wrote:
>>>> + if (p) {
>>
Could we recheck 'rq->nr_running == rq->cfs.h_nr_running' here before
going to pick a fair entity, to make sure of the priority?
Maybe like:
if (idle_balance(rq) &&
rq->nr_running == rq->cfs.h_nr_running)
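In other words, the suggestion is to retake the fair path only when every
runnable task on the rq is a CFS task; a tiny sketch of that predicate, with
a hypothetical helper name:

/* Hypothetical helper: true when all runnable tasks on this rq are CFS tasks. */
static inline bool rq_all_fair(struct rq *rq)
{
        return rq->nr_running == rq->cfs.h_nr_running;
}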
Regards,
Michael Wang
> + rq->idle_stamp = 0;
> *    the priority.
> + */
> +	if (rq->nr_running == rq->cfs.h_nr_running || !need_resched())
> +		goto again;
I like tea BTW, I drink it every day :)
Regards,
Michael Wang
On 02/18/2014 07:22 PM, Peter Zijlstra wrote:
> On Tue, Feb 18, 2014 at 01:12:03PM +0800, Michael wang wrote:
>> Hi, Folks
>>
>> Got below panic while testing tip/master on x86 box, it randomly
>> occur while booting or rebooting, any ideas?
>
> The obvious pi
_SCHED
>> +se->depth = se->parent ? se->parent->depth + 1 : 0;
>> +#endif
>> +if (!se->on_rq)
>> return;
>>
>> /*
>
>
> Michael, do you think you can send a proper patch for this?
My pleasure :) will post it later
On 02/20/2014 02:10 AM, Sasha Levin wrote:
> On 02/17/2014 09:26 PM, Michael wang wrote:
>> On 02/17/2014 05:20 PM, Peter Zijlstra wrote:
>> [snip]
>>>> >> static void switched_to_fair(struct rq *rq, struct task_struct *p)
>>>> >> {
task is FAIR.
CC: Ingo Molnar
CC: Peter Zijlstra
Reported-by: Sasha Levin
Tested-by: Sasha Levin
Signed-off-by: Michael Wang
---
kernel/sched/fair.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 235cfa7..280da89
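The diff is truncated here, but reconstructing from the hunk quoted earlier
in the thread, the shape of the fix is roughly:

static void switched_to_fair(struct rq *rq, struct task_struct *p)
{
        struct sched_entity *se = &p->se;

#ifdef CONFIG_FAIR_GROUP_SCHED
        /* Refresh the depth: it may be stale after running in another class. */
        se->depth = se->parent ? se->parent->depth + 1 : 0;
#endif
        if (!se->on_rq)
                return;

        /* ... rest unchanged ... */
}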
e problem, I suggest we do some retesting after these
patches get merged.
Regards,
Michael Wang
>
> The initial spew is:
>
> [ 293.110057] BUG: soft lockup - CPU#8 stuck for 22s! [migration/8:258]
> [ 293.110057] Modules linked in:
> [ 293.110057] irq event stamp: 20828
> [ 293.
On 01/22/2014 08:36 PM, Peter Zijlstra wrote:
> On Wed, Jan 22, 2014 at 04:27:45PM +0800, Michael wang wrote:
>> # CONFIG_PREEMPT_NONE is not set
>> CONFIG_PREEMPT_VOLUNTARY=y
>> # CONFIG_PREEMPT is not set
>
> Could you try the patch here:
>
> lkml
CC: Paul Mackerras
CC: Nathan Fontenot
CC: Stephen Rothwell
CC: Andrew Morton
CC: Robert Jennings
CC: Jesse Larrew
CC: "Srivatsa S. Bhat"
CC: Alistair Popple
Signed-off-by: Michael Wang
---
arch/powerpc/mm/numa.c | 9 +
1 file changed, 9 insertions(+)
diff --git a
we won't continue the updating,
and an empty updates[] was confirmed to show up inside
arch_update_cpu_topology().
What I can't make sure of is whether this is legal; notifying changes when no
changes happen sounds weird... however, even if it's legal, a check
here still makes sense IMHO.
Regards,
Michael Wa
Hi, Srivatsa
It's nice to have you confirm the fix, and thanks for the well-written
comments; will apply them and send out the new patch later :)
Regards,
Michael Wang
On 04/07/2014 06:15 PM, Srivatsa S. Bhat wrote:
> Hi Michael,
>
> On 04/02/2014 08:59 AM, Michael wang wrote:
CC: "Srivatsa S. Bhat"
CC: Alistair Popple
Suggested-by: "Srivatsa S. Bhat"
Signed-off-by: Michael Wang
---
arch/powerpc/mm/numa.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 3
if (!min_load) {
> + struct tick_sched *ts = &per_cpu(tick_cpu_sched, i);
> +
> + s64 latest_wake = 0;
I guess we're missing some code for latest_wake here?
Regards,
Michael Wang
> + /* idle cpu doing irq */
> +
elaxing with several CPUs idle)
Regards,
Michael Wang
>
> Signed-off-by: Alex Shi
> ---
> kernel/sched/fair.c | 20
> 1 file changed, 20 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c7395d9..fb52d26 100644
>
gher chance for
BINGO than just checking 'tick_stopped'...
BTW, maybe the logic should be in select_idle_sibling()?
Regards,
Michael Wang
>
I
>
> Add the syscalls needed for supporting scheduling algorithms
> with extended scheduling parameters (e.g., SCHED_DEADLINE).
Will this do any help?
Regards,
Michael Wang
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0326c06..bf4a6ed 100644
--- a/kernel/sched/core.
@@ -321,6 +320,8 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
#endif
synchronize_rcu();
+ smpboot_park_threads(cpu);
+
/*
* So now all preempt/rcu users must observe !cpu_active().
*/
Regards,
Michael Wang
>
> commit 6acce3ef8
On 11/12/2013 05:55 PM, Fengguang Wu wrote:
[snip]
>>
>> Good thinking.. Wu did this cure stuff?
Thanks for the confirmation :)
>
> Yes, it fixed the problem.
Thanks for the testing :)
>
> Tested-by: Fengguang Wu
>
Will send out a formal patch later.
Regards,
Mich
Molnar
Reported-by: Fengguang Wu
Tested-by: Fengguang Wu
Signed-off-by: Michael Wang
---
kernel/cpu.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 63aa50d..2227b58 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -306,7 +306,6
check necessary here? If the rq gets more tasks during
the balance, enqueue_task() should already do the check each time
we move_task(), shouldn't it?
Regards,
Michael Wang
> power_late_callback(this_cpu);
> }
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
&
As the comment says, we want a node that benefits BOTH the task and the group;
thus the condition to skip the node should be:
taskimp < 0 || groupimp < 0
CC: Mel Gorman
CC: Ingo Molnar
CC: Peter Zijlstra
Signed-off-by: Michael Wang
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 ins
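Since the diff itself is truncated, a minimal restatement of the proposed
condition; the wrapper name is hypothetical:

/* Skip the node unless it benefits BOTH the task and the group. */
static inline bool skip_numa_node(long taskimp, long groupimp)
{
        return taskimp < 0 || groupimp < 0;
}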
3 sg1:cpu4,5,6,7
MC  sg0:cpu0,1 sg1:cpu2,3
SMT sg0:cpu0 sg1:cpu1
If one domain has only one group, that sounds like a really weird topology...
Regards,
Michael Wang
>>
>> -Mike
>>
>>
>
>
like won't happen...
If 'diff' is negative, its absolute value won't be bigger than '*avg', not
to mention we only use 1/8 of it.
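The arithmetic being defended matches the kernel's update_avg() pattern; a
simplified sketch for reference, not the exact call site:

static void update_avg(u64 *avg, u64 sample)
{
        s64 diff = sample - *avg;

        /* When diff is negative, |diff| = *avg - sample <= *avg,
         * and only 1/8 of it is applied. */
        *avg += diff >> 3;
}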
Regards,
Michael Wang
> }
> #endif
>
one difference when the group gets deeper is that the tasks of
that group start to gather on one CPU more often; sometimes all the
dbench instances were running on the same CPU. This won't happen for the l1
group, which may explain why dbench could not get more than 100% CPU any
more.
But why does the gathering happen
Hey, Mike :)
On 05/16/2014 10:51 AM, Mike Galbraith wrote:
> On Fri, 2014-05-16 at 10:23 +0800, Michael wang wrote:
>
>> But we found that one difference when the group gets deeper is that the tasks of
>> that group start to gather on one CPU more often; sometimes all the
>> dbench
more or less work and does indeed suggest there's
> something iffy.
Yeah, a sane group topology is also an issue... besides the sleeper bonus, it
seems like the root cause is tasks starting to gather. I plan to check
the difference in task load between the two cases, and see if there is a good
way to sol
On 02/13/2014 11:34 AM, Michael wang wrote:
> On 02/12/2014 06:22 PM, Peter Zijlstra wrote:
> [snip]
>>
>> Yes I think there might be a problem here because of how we re-arranged
>> things. Let me brew a pot of tea and try to actually wake up.
>>
>> I susp
utilize the resched flag for the case when an RT/DL task was enqueued but didn't ask
for a resched (will that ever happen?).
CC: Ingo Molnar
Suggested-by: Peter Zijlstra
Signed-off-by: Michael Wang
---
kernel/sched/fair.c | 23 ++-
1 file changed, 22 insertions(+), 1 del
goto got_task;
Since idle_balance() won't happen in the loop, maybe we could use:
if (p && p->sched_class == class)
        return p;
here, and let it fall through into the loop if p is idle, since that means
we got RT/DL and will do this anyway,
depth, and that leads to a wrong depth after switching
back to FAIR...
Regards,
Michael Wang
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 235cfa7..4445e56 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7317,7 +7317,11 @@ static void switched_from_fair(struct rq *r
>> +se->depth = se->parent ? se->parent->depth + 1 : 0;
>> +#endif
>> +if (!se->on_rq)
>> return;
>>
>> /*
>
> Yes indeed. My first idea yesterday was to put it in set_task_rq() to be
> absolutely sure we catch all; but if this is
issue, I'll mail
> about it soon.
Thanks for that, looking forward to the results :)
Regards,
Michael Wang
>
>
> Thanks,
> Sasha
>
s idle, since that means
>> we got RT/DL and will do this anyway, which could save two jumps maybe?
>> (and maybe we could combine some code below if so?)
>
> Maybe; we'd have to look at whatever GCC does with it.
Exactly, alien code appears in the binary...
Regards,
Michael Wang
(== 40) tasks.|Running with 1*40 (== 40) tasks.
Time: 1.157 |Time: 0.998
BTW, I got a panic while rebooting, but it should not be caused by
this patch set; will recheck and post the report later.
Regards,
Michael Wang
INFO: rcu_sched detected stalls on CPUs/tasks: { 7} (detected by 1, t=2
On 02/18/2014 02:03 PM, Alex Shi wrote:
[snip]
>>
>
> I reviewed my patch again. Also didn't find suspicious line for the
> following rcu stall. Will wait for your report. :)
Posted; it can be triggered on pure tip/master, so your patch set was
innocent ;-)
Regards,
Michae
On 05/13/2014 05:47 PM, Peter Zijlstra wrote:
> On Tue, May 13, 2014 at 11:34:43AM +0800, Michael wang wrote:
>> During our testing, we found that the cpu.shares doesn't work as
>> expected, the testing is:
>>
>
> /me zaps all the kvm nonsense as that's non re
s time waiting for locks. That waiting may interfere with getting
> as much CPU as it wants.
That's what we are thinking; we also assume that by introducing the load
decay mechanism, it becomes harder for the sleepy tasks to gain enough
slice. Well, that's currently just imagination; more i
.org/lkml/2012/6/18/212
That's what we need; maybe a little rework to enable multiple threads, or
maybe add some locks... anyway, will redo the test and see what we
can find :)
Regards,
Michael Wang
6
Now it seems more like a generic problem... will keep investigating; please
let me know if there are any suggestions :)
Regards,
Michael Wang
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/time.h>
pthread_mutex_t my_mutex;
unsigned long long stamp(void)
{
struct timeval tv;
gettimeofday(&tv, NULL);
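For reference, a self-contained reconstruction of the kind of mutex
micro-benchmark the truncated snippet suggests; the thread count, loop count,
and worker body are assumptions:

#include <stdio.h>
#include <pthread.h>
#include <sys/time.h>

#define NR_THREADS 4
#define NR_LOOPS   1000000ULL

static pthread_mutex_t my_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned long long count;

static unsigned long long stamp(void)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000000ULL + tv.tv_usec;
}

static void *worker(void *arg)
{
        unsigned long long i;

        /* Contend on the mutex so the threads serialize, as in the thread's test. */
        for (i = 0; i < NR_LOOPS; i++) {
                pthread_mutex_lock(&my_mutex);
                count++;
                pthread_mutex_unlock(&my_mutex);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NR_THREADS];
        unsigned long long start = stamp();
        int i;

        for (i = 0; i < NR_THREADS; i++)
                pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NR_THREADS; i++)
                pthread_join(tid[i], NULL);

        printf("%llu iterations in %llu us\n", count, stamp() - start);
        return 0;
}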
milar to the kernel one's behaviour, then
it may not go to sleep when it's the only one running on the CPU.
Oh, I think we've got the reason here: when there are other tasks running,
the mutex will go to sleep and the %CPU drops to the serialized case, which is
around 100%.
But for the dbench,
On 05/15/2014 04:35 PM, Peter Zijlstra wrote:
> On Thu, May 15, 2014 at 11:46:06AM +0800, Michael wang wrote:
>> But for the dbench, stress combination, that's not spin-wasted; dbench
>> throughput did drop, how could we explain that one?
>
> I've no clue what dbe
er light load */
That is trying to solve the load overflow issue, correct?
I'm not sure which accounting will turn out to be huge when the group gets deeper;
the load accumulation suffers a discount when passing up, doesn't it?
Anyway, will give it a try and see what happens :)
Regards,
issue to be fixed?
Any comments are welcome :)
Regards,
Michael Wang
ng inside
idle_balance(), including Kirill who is the designer ;-)
Regards,
Michael Wang
>
> Stack trace is similar to before:
>
> [ 6004.990292] CPU: 20 PID: 26054 Comm: trinity-c58 Not tainted
> 3.14.0-next-20140409-sasha-00022-g984f7c5-dirty #385
> [ 6004.990292] task: 8
On 04/08/2014 11:19 AM, Michael wang wrote:
> Since v1:
> Edited the comment according to Srivatsa's suggestion.
>
> During the testing, we encountered the below WARN followed by an Oops:
Are there any more comments on this issue? Should we apply this fix?
Regards,
Michael Wang
), IMHO this seems like not such a good idea... what
we gain isn't worth the overhead.
But if we have testing showing this modification could benefit most of the
workloads (I don't think so, but who knows...), then we'll have a
reason to add some load-comparison logic inside that quick pat
On 06/30/2014 05:27 PM, Mike Galbraith wrote:
> On Mon, 2014-06-30 at 16:47 +0800, Michael wang wrote:
[snip]
>>> While you're getting rid of the concept of 'GENTLE_FAIR_SLEEPERS', don't
>>> forget to also get rid of the concept of 'over-scheduling
On 07/01/2014 01:41 PM, Mike Galbraith wrote:
> On Tue, 2014-07-01 at 10:57 +0800, Michael wang wrote:
>
>> IMHO, currently the generic scheduler just tries to take care of both latency
>> and throughput; both will take a little damage, but won't be damaged too
>> much,
methods to
address that?
Regards,
Michael Wang
>
On 07/01/2014 04:56 PM, Peter Zijlstra wrote:
> On Tue, Jul 01, 2014 at 04:38:58PM +0800, Michael wang wrote:
[snip]
>> Currently, when dbench runs with stress, it can only gain one CPU,
>> and the cpu-cgroup cpu.shares is meaningless; are there any good methods to
>> address
On 07/02/2014 08:49 PM, Peter Zijlstra wrote:
> On Wed, Jul 02, 2014 at 10:47:34AM +0800, Michael wang wrote:
>> The opinion on features actually makes me a little confused... I used to
>> think the scheduler was willing to provide various ways to adapt itself
>> to differen
On 07/02/2014 10:47 PM, Rik van Riel wrote:
> On 07/01/2014 04:38 AM, Michael wang wrote:
>> On 07/01/2014 04:20 PM, Peter Zijlstra wrote:
>> [snip]
>>>>
>>>> Just wondering could we make this another scheduler feature?
>>>
>>> No
On 06/18/2014 12:50 PM, Michael wang wrote:
> By testing we found that after putting the benchmark (dbench) into a deep cpu-group,
> tasks (dbench routines) start to gather on one CPU, which leads to the
> benchmark only getting around 100% CPU no matter how big its task-group's
hares will lead to 3:4:4 on CPU%, and the throughput of
dbench also rose, so we finally have a way to help dbench (a transaction workload)
fight with stress (a CPU-intensive workload).
CC: Ingo Molnar
CC: Peter Zijlstra
Signed-off-by: Michael Wang
---
kernel/sch
Hi, Mike :)
On 06/30/2014 04:06 PM, Mike Galbraith wrote:
> On Mon, 2014-06-30 at 15:36 +0800, Michael wang wrote:
>> On 06/18/2014 12:50 PM, Michael wang wrote:
>>> By testing we found that after putting the benchmark (dbench) into a deep cpu-group,
>>> tasks (dbench routine
select_idle_sibling(); the only
difference is that now we balance tasks inside the group to prevent them from
gathering.
The below patch has solved the problem during testing; I'd like to do more
testing on other benchmarks before sending out the formal patch. Any comments
are welcome ;-)
Regards
tive to queueing, and select_idle_siblings()
> avoids a lot of queueing on an idle system. I don't think that's
> something we should fix with cgroups.
It has to queue anyway after wakeup, doesn't it? We just want a good
candidate which won't make things too bad inside the group, and only do this
when select_idle_siblings() gives up on searching...
Regards,
Michael Wang
>
h (istr) 12 cpus the
> avg cpu load would be 3072/12 ~ 256, and 170 is significant on that
> scale.
>
> Same with l2, total weight of 1024, giving a per task weight of ~56 and
> a per-cpu weight of ~85, which is again significant.
We have other tasks which have to run i
other, they may gather.
Please let me know if you have any questions on either the issue or the fix;
comments are welcome ;-)
CC: Ingo Molnar
CC: Peter Zijlstra
Signed-off-by: Michael Wang
---
kernel/sched/fair.c | 81 +++
1 file ch
group's shares is, and we consider that the cpu-group is broken in these
cases...
I agree that this is not a generic requirement and the scheduler should only
be responsible for the general situation, but since it's really too big a
regression, could we at least provide some way to stop the damage? With
a feature like:
SCHED_FEAT(TG_INTERNAL_BALANCE, false)
I do believe there are more cases that could benefit from it; for those who
don't want too much wake-affine and want group tasks more balanced on
each CPU, the scheduler could provide this as an option then, shall we?
Regards,
Michael Wang
nst struct
cpumask *in_mask)
p = find_process_by_pid(pid);
if (!p) {
rcu_read_unlock();
- put_online_cpus();
return -ESRCH;
}
Regards,
Michael Wang
>
> I got the below dmesg and the first bad commit is
On 10/23/2013 04:46 AM, Peter Zijlstra wrote:
> On Mon, Oct 21, 2013 at 11:28:30AM +0800, Michael wang wrote:
>> Hi, Fengguang
>>
>> On 10/19/2013 08:51 AM, Fengguang Wu wrote:
>>> Greetings,
>>
>> Will this do any help?
>>
>> diff --gi
t_online_cpus();
>> return -ESRCH;
>
> Yes, it fixed the WARNING.
>
> Tested-by: Fengguang Wu
Thanks for the testing :)
>
> // The tests was queued for Michael Wang and have just finished.
>
> There seems to show up a new unreliable error "BUG:ker
]
CC: Ingo Molnar
CC: Peter Zijlstra
Reported-by: Fengguang Wu
Tested-by: Fengguang Wu
Signed-off-by: Michael Wang
---
kernel/sched/core.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c06b8d3..7c61f31 100644
--- a/kernel/sched/core.c
_cpus();
>> return -ESRCH;
>> }
>
> The patch is whitespace damaged.
Forgive me for the silly mistake... the line may be cursed... will
recheck and send out the right format. Thanks for the notice :)
Regards,
Michael Wang
>
> Thanks,
>
>
]---
[ 58.757521] [ cut here ]
CC: Ingo Molnar
CC: Peter Zijlstra
Reported-by: Fengguang Wu
Tested-by: Fengguang Wu
Signed-off-by: Michael Wang
---
kernel/sched/core.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c06b8d3
Hi, folks
I'll change my mail address soon and will use 'wangyun2...@163.com'
temporarily.
Regards,
Michael Wang
found in the GID table, but such connections
> would fail later on when creating a QP, right?
I too think this needs reconsidering; to me, the current logic doesn't
really care about a missing GID in the cache when initializing the AV. I'm not
sure if it's necessary to fail the whole following path for
On 12/15/2015 06:30 PM, Jason Gunthorpe wrote:
> On Tue, Dec 15, 2015 at 05:38:34PM +0100, Michael Wang wrote:
>> The hop_limit only suggests that the packet is allowed to be
>> routed, not that it has to be, correct?
>
> If the hop limit is >= 2 (?) then the GRH is mandatory. T
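Read that way, the check would look something like this sketch, assuming the
pre-4.12 struct ib_ah_attr layout; the ">= 2" threshold is taken from the
quoted text, question mark and all:

#include <rdma/ib_verbs.h>

/* Sketch: a hop_limit that permits routing implies the AH must carry a GRH. */
static bool av_needs_grh(const struct ib_ah_attr *attr)
{
        return attr->grh.hop_limit >= 2;
}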