Re: Why is niceness not always taken into account?

2013-04-19 Thread Alexandre Laurent

Ok :)

Thank you very much :)

On 18.04.2013 19:30, valdis.kletni...@vt.edu wrote:

> On Thu, 18 Apr 2013 17:56:58 +0200, Alexandre Laurent said:
>
>> My question was more like: is there a way (like giving a hint) to ask
>> the autogroup system to group two SSH sessions, in order to get nice
>> behaving as expected without disabling the whole autogroup system?
>
> Sure. Launch both SSH'es so they have the same control terminal.
>
> (And yes, that does get problematic, trying to run two ssh'es in the
> same xterm/whatever :)
>
> SCHED_AUTOGROUP is *not*, repeat *NOT* very flexible. It implements one
> policy that happens to be very simple to code and yet work well for a
> lot of use cases. You want something different, you can't use AUTOGROUP
> for it.


Re: Why is niceness not always taken into account?

2013-04-18 Thread Valdis.Kletnieks
On Thu, 18 Apr 2013 17:56:58 +0200, Alexandre Laurent said:

> My question was more like: is there a way (like giving a hint) to ask
> the autogroup system to group two SSH sessions, in order to get nice
> behaving as expected without disabling the whole autogroup system?

Sure.  Launch both SSH'es so they have the same control terminal.

(And yes, that does get problematic, trying to run two ssh'es in the
same xterm/whatever :)
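
In practice that means starting both from the same shell, e.g. (a
sketch; './test' stands in for the actual workload, and the negative
niceness needs root):

    # One shell = one session = one autogroup; nice then works within it
    nice -n -20 ./test &
    nice -n 19 ./test &
    wait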

SCHED_AUTOGROUP is *not*, repeat *NOT* very flexible.  It implements one
policy that happens to be very simple to code and yet work well for a lot
of use cases.  You want something different, you can't use AUTOGROUP for it.




Re: Why is niceness not always taken into account?

2013-04-18 Thread Alexandre Laurent
I do not want to touch the kernel at all. I was giving the information
about cgroups since it was one of the questions asked earlier. I
understood that SCHED_AUTOGROUP does not use cgroups at all.

My question was more like: is there a way (like giving a hint) to ask
the autogroup system to group two SSH sessions, in order to get nice
behaving as expected without disabling the whole autogroup system?

On 18.04.2013 17:32, valdis.kletni...@vt.edu wrote:

> On Thu, 18 Apr 2013 16:57:41 +0200, Alexandre Laurent said:
>
>> Note: the cgroups are not mounted at all.
>
> The cgroups filesystem doesn't have to be mounted for that - the kernel
> handles that internally.
>
>> I still have a little question about it:
>> Is it possible to force the grouping of specific tasks?
>> (Which could be better than just disabling the feature.)
>
> At that point, you're better off mounting the cgroups filesystem and
> using something like systemd to put tasks into cgroups and control
> them. It's a Really Bad Idea to try to handle that in-kernel.
> SCHED_AUTOGROUP relies on the fact that many heavy-load processes are
> launched from xterms, so grouping "everything in each xterm" into a
> separate group, and then one group for everything launched from the
> desktop, works pretty well and is really bog-simple to code. Trying to
> do anything more complicated in-kernel will be a mess, because nobody
> agrees on a policy that should be used (other than the one used by
> AUTOGROUP).



Re: Why is niceness not always taken into account?

2013-04-18 Thread Valdis.Kletnieks
On Thu, 18 Apr 2013 16:57:41 +0200, Alexandre Laurent said:

> Note: the cgroups are not mounted at all.

The cgroups filesystem doesn't have to be mounted for that - the kernel
handles that internally.
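
For illustration, on kernels with CONFIG_SCHED_AUTOGROUP the in-kernel
grouping is visible (and tunable) through /proc, as documented in
proc(5):

    # Show the current shell's autogroup and that group's nice value
    cat /proc/self/autogroup        # e.g. "/autogroup-123 nice 0"

    # Adjust the nice value of the autogroup as a whole
    echo 10 > /proc/self/autogroup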

> I still have a little question about it:
> Is it possible to force the grouping of specific tasks?
> (Which could be better than just disabling the feature.)

At that point, you're better off mounting the cgroups filesystem and using
something like systemd to put tasks into cgroups and control them.  It's
a Really Bad Idea to try to handle that in-kernel.  SCHED_AUTOGROUP relies
on the fact that many heavy-load processes are launched from xterms, so
grouping "everything in each xterm" into a separate group, and then one
group for everything launched from the desktop, works pretty well and is
really bog-simple to code.  Trying to do anything more complicated
in-kernel will be a mess, because nobody agrees on a policy that should be
used (other than the one used by AUTOGROUP).
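
If you do go that route, the manual steps look roughly like this (a
hand-rolled sketch using the cgroup-v1 cpu controller; the PID variables
are placeholders, and systemd/libcgroup would normally do this
bookkeeping for you):

    # Mount the cpu controller
    mkdir -p /sys/fs/cgroup/cpu
    mount -t cgroup -o cpu none /sys/fs/cgroup/cpu

    # Create two groups with a 10:1 CPU weight ratio (default is 1024)
    mkdir /sys/fs/cgroup/cpu/fast /sys/fs/cgroup/cpu/slow
    echo 10240 > /sys/fs/cgroup/cpu/fast/cpu.shares
    echo 1024  > /sys/fs/cgroup/cpu/slow/cpu.shares

    # Move one test process into each group
    echo $FAST_PID > /sys/fs/cgroup/cpu/fast/tasks
    echo $SLOW_PID > /sys/fs/cgroup/cpu/slow/tasks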





Re: Why is niceness not always taken into account?

2013-04-18 Thread Alexandre Laurent
Hello,

Disabling SCHED_AUTOGROUP (by setting kernel.sched_autogroup_enabled
to 0 with sysctl) worked very well. Thank you very much.
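
For the record, that amounts to something like:

    # Check whether autogrouping is currently on (1 = enabled)
    sysctl kernel.sched_autogroup_enabled

    # Turn it off at runtime
    sysctl -w kernel.sched_autogroup_enabled=0

    # Equivalently, through /proc
    echo 0 > /proc/sys/kernel/sched_autogroup_enabled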

This is understandable when reading the following description:
"
This option optimizes the scheduler for common desktop workloads by 
automatically creating and populating task groups. This separation of 
workloads isolates aggressive CPU burners (like build jobs) from desktop 
applications. Task group autogeneration is currently based upon task 
session.
"

Note: the cgroups are not mounted at all.

I still have a little question about it:
Is it possible to force the grouping of specific tasks?
(Which could be better than just disabling the feature.)

Best regards,

On 16.04.2013 19:59, Kristof Provost wrote:

> On 2013-04-16 17:38:50 (+0200), Alexandre Laurent wrote:
>
>> On the computer where I am testing, I have nothing related to
>> cgroups.
>> Here is a 'ps aux' in case I am missing something.
>
> cgroups wouldn't actually show up in the process list. Check mount to
> see if anyone mounts an fs of type 'cgroup'.
>
> It's perhaps even more likely that it's related to SCHED_AUTOGROUP as
> Michi suggested.



Re: Why is niceness not always taken into account?

2013-04-16 Thread Kristof Provost
On 2013-04-16 17:38:50 (+0200), Alexandre Laurent wrote:

> On the computer where I am testing, I have nothing related to cgroups.
>
> Here is a 'ps aux' in case I am missing something.

cgroups wouldn't actually show up in the process list. Check mount to
see if anyone mounts an fs of type 'cgroup'.
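
Something like:

    # List any mounted cgroup filesystems
    grep cgroup /proc/mounts
    mount -t cgroup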

It's perhaps even more likely that it's related to SCHED_AUTOGROUP as
Michi suggested.

-- 
Kristof



Re: Why is niceness not always taken into account?

2013-04-16 Thread michi1
Hi!

On 10:35 Tue 16 Apr , Alexandre Laurent wrote:
...
> I am running the same test, but connecting twice to the remote machine
> (one connection per test instance). I am using exactly the same
> commands as during the other experiments. But with two SSH instances,
> the niceness is not taken into account. The CPU is shared equally
> between both instances, even though htop shows a niceness of 19 / -20
> for the low-priority program and the privileged program respectively.

Can you check whether you have CONFIG_SCHED_AUTOGROUP enabled? If it is
enabled, try running the test again with this option turned off.
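
For example (paths depend on where your distribution keeps the kernel
config):

    # Config of the running kernel, if installed under /boot
    grep SCHED_AUTOGROUP /boot/config-$(uname -r)

    # Or via /proc, if CONFIG_IKCONFIG_PROC is set
    zgrep SCHED_AUTOGROUP /proc/config.gz

    # If it is compiled in, this runtime knob exists (1 = enabled)
    cat /proc/sys/kernel/sched_autogroup_enabled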

-Michi
-- 
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com



Re: Why is niceness not always taken into account?

2013-04-16 Thread Alexandre Laurent

On the computer where I am testing, I have nothing related to cgroups.

Here is a 'ps aux' in case I am missing something.

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 10652 836 ? Ss avril09 0:03 init [2]
root 2 0.0 0.0 0 0 ? S avril09 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S avril09 0:07 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/u:0H]
root 8 0.0 0.0 0 0 ? S avril09 0:00 [migration/0]
root 9 0.0 0.0 0 0 ? S avril09 0:00 [rcu_bh]
root 10 0.0 0.0 0 0 ? S avril09 1:05 [rcu_sched]
root 11 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/0]
root 12 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/1]
root 13 0.0 0.0 0 0 ? S avril09 0:15 [ksoftirqd/1]
root 14 0.0 0.0 0 0 ? S avril09 0:00 [migration/1]
root 16 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/1:0H]
root 17 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/2]
root 18 0.0 0.0 0 0 ? S avril09 0:17 [ksoftirqd/2]
root 19 0.0 0.0 0 0 ? S avril09 0:00 [migration/2]
root 21 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/2:0H]
root 22 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/3]
root 23 0.0 0.0 0 0 ? S avril09 0:17 [ksoftirqd/3]
root 24 0.0 0.0 0 0 ? S avril09 0:00 [migration/3]
root 26 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/3:0H]
root 27 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/4]
root 28 0.0 0.0 0 0 ? S avril09 0:00 [ksoftirqd/4]
root 29 0.0 0.0 0 0 ? S avril09 0:00 [migration/4]
root 30 0.0 0.0 0 0 ? S avril09 0:00 [kworker/4:0]
root 31 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/4:0H]
root 32 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/5]
root 33 0.0 0.0 0 0 ? S avril09 0:08 [ksoftirqd/5]
root 34 0.0 0.0 0 0 ? S avril09 0:00 [migration/5]
root 36 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/5:0H]
root 37 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/6]
root 38 0.0 0.0 0 0 ? S avril09 0:10 [ksoftirqd/6]
root 39 0.0 0.0 0 0 ? S avril09 0:00 [migration/6]
root 40 0.0 0.0 0 0 ? S avril09 0:00 [kworker/6:0]
root 41 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/6:0H]
root 42 0.0 0.0 0 0 ? S avril09 0:01 [watchdog/7]
root 43 0.0 0.0 0 0 ? S avril09 0:08 [ksoftirqd/7]
root 44 0.0 0.0 0 0 ? S avril09 0:00 [migration/7]
root 45 0.0 0.0 0 0 ? S avril09 0:00 [kworker/7:0]
root 46 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/7:0H]
root 47 0.0 0.0 0 0 ? S< avril09 0:00 [cpuset]
root 48 0.0 0.0 0 0 ? S< avril09 0:00 [khelper]
root 49 0.0 0.0 0 0 ? S avril09 0:00 [kdevtmpfs]
root 50 0.0 0.0 0 0 ? S< avril09 0:00 [netns]
root 51 0.0 0.0 0 0 ? S avril09 0:00 [bdi-default]
root 52 0.0 0.0 0 0 ? S< avril09 0:00 [kintegrityd]
root 53 0.0 0.0 0 0 ? S< avril09 0:00 [kblockd]
root 54 0.0 0.0 0 0 ? S avril09 0:31 [kworker/0:1]
root 55 0.0 0.0 0 0 ? S avril09 0:00 [khungtaskd]
root 56 0.0 0.0 0 0 ? S avril09 0:00 [kswapd0]
root 57 0.0 0.0 0 0 ? SN avril09 0:00 [ksmd]
root 58 0.0 0.0 0 0 ? SN avril09 0:04 [khugepaged]
root 59 0.0 0.0 0 0 ? S avril09 0:00 [fsnotify_mark]
root 60 0.0 0.0 0 0 ? S< avril09 0:00 [crypto]
root 64 0.0 0.0 0 0 ? S< avril09 0:00 [deferwq]
root 66 0.0 0.0 0 0 ? S avril09 0:00 [kworker/0:2]
root 67 0.0 0.0 0 0 ? S avril09 0:08 [kworker/5:1]
root 131 0.0 0.0 0 0 ? S avril09 0:00 [khubd]
root 202 0.0 0.0 0 0 ? S< avril09 0:00 [ata_sff]
root 206 0.0 0.0 0 0 ? S avril09 0:00 [scsi_eh_0]
root 207 0.0 0.0 0 0 ? S avril09 0:00 [scsi_eh_1]
root 208 0.0 0.0 0 0 ? S avril09 0:00 [scsi_eh_2]
root 209 0.0 0.0 0 0 ? S avril09 0:00 [scsi_eh_3]
root 210 0.0 0.0 0 0 ? S avril09 0:00 [scsi_eh_4]
root 211 0.0 0.0 0 0 ? S avril09 0:00 [scsi_eh_5]
root 214 0.0 0.0 0 0 ? S avril09 0:00 [kworker/u:4]
root 215 0.0 0.0 0 0 ? S avril09 0:00 [kworker/u:5]
root 218 0.0 0.0 0 0 ? S avril09 0:12 [kworker/1:1]
root 219 0.0 0.0 0 0 ? S avril09 0:08 [kworker/7:1]
root 226 0.0 0.0 0 0 ? S avril09 0:13 [kworker/3:1]
root 230 0.0 0.0 0 0 ? S avril09 0:00 [kworker/5:2]
root 232 0.0 0.0 0 0 ? S< avril09 0:06 [kworker/0:1H]
root 241 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/5:1H]
root 247 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/4:1H]
root 251 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/1:1H]
root 252 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/6:1H]
root 253 0.0 0.0 0 0 ? S avril09 0:00 [kworker/2:1]
root 254 0.0 0.0 0 0 ? S avril09 0:08 [kjournald]
root 258 0.0 0.0 0 0 ? S avril09 0:08 [kworker/6:1]
root 267 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/3:1H]
root 337 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/2:1H]
root 394 0.0 0.0 0 0 ? S< avril09 0:00 [kworker/7:1H]
root 402 0.0 0.0 21836 1784 ? Ss avril09 0:00 udevd --daemon
root 583 0.0 0.0 0 0 ? S< avril09 0:00 [kpsmoused]
root 584 0.0 0.0 0 0 ? S avril09 0:00 [kworker/1:2]
root 691 0.0 0.0 0 0 ? S avril09 0:06 [kworker/4:2]
root 746 0.0 0.0 0 0 ? S< avril09 0:00 [kvm-irqfd-clean]
root 748 0.0 0.0 0 0 ? S< avril09 0:00 [hd-audio0]
root 772 0.0 0.0 96268 4048 ? Ss 17:28 0:00 sshd: lalexandre [priv]
10084 777 0.0 0.0 96268 1892 ? S 17:28 0:00 sshd: lalexandre@pts/1
10084 778 0.0 0.0 24008 4748 pts/1 Ss+ 17:28 0:00 -bash
root 953 0.0 0.0 0 0 ? S avril09 0:05 [flush-8:0]
root 1037 0.0 0.0 0 0 ? S 17:33 0:00 [flush-0:22]
root 1043 0.0 0.0 18900 1280 pts/4 R+ 17:

Re: Why is niceness not always taken into account?

2013-04-16 Thread Kristof Provost
On 2013-04-16 10:35:05 (+0200), Alexandre Laurent wrote:

> My problem is that in some cases this does not work at all. It works
> fine if I run both programs in the same instance of the terminal, or
> from a script (so, the same instance of the interpreter). But it does
> not work if I run the instances in separate SSH sessions. When I say it
> does not work: both instances take 24s to run and the CPU usage is just
> shared between the tasks.

Is it possible that you're running systemd or something else which is
configuring cgroups?

Regards,
Kristof



Why is niceness not always taken into account?

2013-04-16 Thread Alexandre Laurent
Hello,

I have some questions about nice and process scheduling.
My machine has an 8-core CPU. For the experiments I am using a
CPU-intensive test program, written with OpenMP and running on all 8
cores.

When I run the test alone, it takes around 12s. If I start two
instances at the same time, each takes 24s, which is totally fine and
expected.

I wanted to prioritize one instance of the test with nice. To do this,
I am applying a niceness of -20 to the privileged one, and setting a
niceness of 20 to "slow down" the second one.
This usually works well. The privileged one runs in around 12s (so, at
full speed) and the other one in 24s (effectively paused for 12s, then
running the last 12 seconds at full speed).
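
Concretely, the two runs look like this ('./test' stands for my OpenMP
program; note that nice clamps a requested 20 to the maximum of 19,
which matches the value htop reports below):

    # Session 1 - privileged instance (negative niceness needs root)
    nice -n -20 ./test

    # Session 2 - deprioritized instance
    nice -n 20 ./test    # effective niceness is clamped to 19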

My problem is that in some cases this does not work at all. It works
fine if I run both programs in the same instance of the terminal, or
from a script (so, the same instance of the interpreter). But it does
not work if I run the instances in separate SSH sessions. When I say it
does not work: both instances take 24s to run and the CPU usage is just
shared between the tasks. More precisely:

I am running the same test, but connecting twice to the remote machine
(one connection per test instance). I am using exactly the same
commands as during the other experiments. But with two SSH instances,
the niceness is not taken into account. The CPU is shared equally
between both instances, even though htop shows a niceness of 19 / -20
for the low-priority program and the privileged program respectively.

(For information: if I run both programs through a single SSH session
using a script, or even by typing the commands directly in that SSH
terminal, it works as expected. So the cause is not SSH itself.)

Can you explain why, in such a case, the niceness is not taken into
account?
Can you tell me how I can work around this problem, to effectively set
the niceness and have it respected by the system?

Best regards,
