Re: Why the niceness is not always taken into account ?

2013-04-19 Thread Alexandre Laurent
 

Ok :) 

Thank you very much :) 

On 18.04.2013 19:30, valdis.kletni...@vt.edu wrote:

> On Thu, 18 Apr 2013 17:56:58 +0200, Alexandre Laurent said:
>
>> My question was more like: is there a way (like giving a hint) to ask
>> the autogroup system to group two SSH sessions in order to get nice
>> behaving as expected without disabling the whole autogroup system?
>
> Sure. Launch both SSH'es so they have the same control terminal.
>
> (And yes, that does get problematic, trying to run two ssh'es in the
> same xterm/whatever :)
>
> SCHED_AUTOGROUP is *not*, repeat *NOT*, very flexible. It implements one
> policy that happens to be very simple to code and yet work well for a
> lot of use cases. You want something different, you can't use AUTOGROUP
> for it.
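Valdis's suggestion can be sketched in shell: when both workloads are started from the same shell session they land in the same autogroup, so their nice values are weighed against each other as expected. Here `sha1sum` over a finite chunk of `/dev/zero` is a hypothetical stand-in for the CPU-bound benchmark, which is not shown in the thread:

```shell
# Both jobs share this shell's session (and therefore its autogroup),
# so nice takes effect between them.
nice -n 19 sh -c 'head -c 50000000 /dev/zero | sha1sum' > /dev/null &
low=$!
sh -c 'head -c 50000000 /dev/zero | sha1sum' > /dev/null &
# Inspect the niceness actually applied to the deprioritized job:
ps -o ni= -p "$low"
wait
```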
 ___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Why the niceness is not always taken into account ?

2013-04-18 Thread Alexandre Laurent
I do not want to touch the kernel at all. I was giving the information 
about cgroups since it was one of the questions asked before. I 
understood that SCHED_AUTOGROUP does not use cgroups at all.

My question was more like: is there a way (like giving a hint) to ask 
the autogroup system to group two SSH sessions, in order to get nice 
behaving as expected without disabling the whole autogroup system?

On 18.04.2013 17:32, valdis.kletni...@vt.edu wrote:

> On Thu, 18 Apr 2013 16:57:41 +0200, Alexandre Laurent said:
>
>> Note : the cgroups are not mounted at all.
>
> The cgroups filesystem doesn't have to be mounted for that - the kernel
> handles that internally.
>
>> I still have a little question about it : Is it possible to force the
>> grouping of specific tasks ? (Which could be better than just
>> disabling the feature)
>
> At that point, you're better off mounting the cgroups filesystem and
> using something like systemd to put tasks into cgroups and control
> them. It's a Really Bad Idea to try to handle that in-kernel.
> SCHED_AUTOGROUP relies on the fact that many heavy-load processes are
> launched from xterms, so grouping "everything in each xterm" into a
> separate group and then one group for everything launched from the
> desktop works pretty well, and is really bog-simple to code. Trying to
> do anything more complicated in-kernel will be a mess, because nobody
> agrees on a policy that should be used (other than the one used by
> AUTOGROUP).
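A minimal sketch of the approach described above, assuming a 2013-era cgroup-v1 `cpu` controller, root privileges, and the PIDs of the two test instances in shell variables; the mount point, group names, and share values are all illustrative:

```shell
# Mount the v1 cpu controller and weight two groups explicitly, rather
# than relying on autogroup's per-session heuristic.
mkdir -p /sys/fs/cgroup/cpu
mount -t cgroup -o cpu none /sys/fs/cgroup/cpu
mkdir /sys/fs/cgroup/cpu/fast /sys/fs/cgroup/cpu/slow
echo 8192 > /sys/fs/cgroup/cpu/fast/cpu.shares   # ~8x the default 1024
echo 128  > /sys/fs/cgroup/cpu/slow/cpu.shares
# $FAST_PID and $SLOW_PID are placeholders for your two running tests:
echo "$FAST_PID" > /sys/fs/cgroup/cpu/fast/tasks
echo "$SLOW_PID" > /sys/fs/cgroup/cpu/slow/tasks
```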



Re: Why the niceness is not always taken into account ?

2013-04-18 Thread Alexandre Laurent
Hello,

Disabling SCHED_AUTOGROUP (through the kernel.sched_autogroup_enabled 
sysctl) worked very well. Thank you very much.
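For reference, the runtime switch used here (root required; the setting does not persist across reboots unless also added to /etc/sysctl.conf):

```shell
# Turn autogrouping off at runtime; the two forms are equivalent.
sysctl -w kernel.sched_autogroup_enabled=0
echo 0 > /proc/sys/kernel/sched_autogroup_enabled
```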

This is understandable when reading the following description:
"
This option optimizes the scheduler for common desktop workloads by 
automatically creating and populating task groups. This separation of 
workloads isolates aggressive CPU burners (like build jobs) from desktop 
applications. Task group autogeneration is currently based upon task 
session.
"

Note: the cgroups are not mounted at all.

I still have a little question about it:
Is it possible to force the grouping of specific tasks?
(Which could be better than just disabling the feature.)

Best regards,

On 16.04.2013 19:59, Kristof Provost wrote:

> On 2013-04-16 17:38:50 (+0200), Alexandre Laurent wrote:
>
>> On the computer where I am testing, I have nothing related to cgroups.
>> Here a 'ps aux' in case I am missing something.
>
> cgroups wouldn't actually show up in the process list. Check mount to
> see if anyone mounts an fs of type 'cgroup'.
>
> It's perhaps even more likely that it's related to SCHED_AUTOGROUP as
> Michi suggested.



Re: Why the niceness is not always taken into account ?

2013-04-16 Thread Alexandre Laurent
:0]
root 1037 0.0 0.0 0 0 ? S 17:33 0:00 [flush-0:22]
root 1043 0.0 0.0 18900 1280 pts/4 R+ 17:35 0:00 ps aux
root 1687 0.0 0.0 0 0 ? S avril09 0:00 [kjournald]
root 1973 0.0 0.0 18972 940 ? Ss avril09 0:00 /sbin/rpcbind -w
statd 2004 0.0 0.0 23344 1344 ? Ss avril09 0:00 /sbin/rpc.statd
root 2009 0.0 0.0 0 0 ? S< avril09 0:00 [rpciod]
root 2011 0.0 0.0 0 0 ? S< avril09 0:00 [nfsiod]
root 2020 0.0 0.0 29500 696 ? Ss avril09 0:00 /usr/sbin/rpc.idmapd
root 2397 0.0 0.0 118444 1804 ? Sl avril09 0:06 /usr/sbin/rsyslogd -c5
root 2399 0.0 0.0 3936 80 ? Ss avril09 0:00 /usr/sbin/acpi_fakekeyd
daemon 2513 0.0 0.0 16672 148 ? Ss avril09 0:00 /usr/sbin/atd
root 2615 0.0 0.0 4248 832 ? Ss avril09 0:00 /usr/sbin/acpid
ntp 2656 0.0 0.0 41064 2404 ? Ss avril09 0:17 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 115:122
root 2894 0.0 0.0 61128 2016 ? Ssl avril09 0:06 /usr/sbin/automount --pid-file /var/run/autofs.pid
102 2900 0.0 0.0 32816 1956 ? Ss avril09 0:00 /usr/bin/dbus-daemon --system
root 2951 0.0 0.0 120364 2752 ? Ss avril09 0:21 /usr/sbin/sssd -D -f
root 2961 0.0 0.0 149132 10352 ? SL avril09 1:57 /usr/lib/sssd/sssd/sssd_be --domain LDAP --debug-to-files
root 2963 0.0 0.0 104652 3452 ? S avril09 0:06 /usr/lib/sssd/sssd/sssd_nss --debug-to-files
root 2964 0.0 0.0 104560 3432 ? S avril09 0:05 /usr/lib/sssd/sssd/sssd_pam --debug-to-files
root 2966 0.0 0.0 58108 3868 ? S avril09 0:02 /opt/cisco/vpn/bin/vpnagentd
avahi 2990 0.0 0.0 36660 2336 ? S avril09 1:34 avahi-daemon: running [auric.local]
avahi 2993 0.0 0.0 36108 468 ? S avril09 0:00 avahi-daemon: chroot helper
root 2994 0.0 0.0 156116 5264 ? Ssl avril09 0:01 /usr/sbin/NetworkManager
root 3091 0.0 0.0 134436 4536 ? Sl avril09 0:00 /usr/lib/policykit-1/polkitd --no-debug
root 3095 0.0 0.0 80868 3228 ? S avril09 0:00 /usr/sbin/modem-manager
105 3161 0.0 0.0 46804 1264 ? Ss avril09 0:00 /usr/sbin/exim4 -bd -q30m
root 3181 0.0 0.0 0 0 ? S< avril09 0:00 [krfcommd]
root 3245 0.0 0.0 20408 1052 ? Ss avril09 0:00 /usr/sbin/cron
root 3284 0.0 0.0 77856 3488 ? Ss avril09 0:08 /usr/sbin/cupsd -C /etc/cups/cupsd.conf
colord 3286 0.0 0.0 150028 4580 ? Sl avril09 0:00 /usr/lib/x86_64-linux-gnu/colord/colord
colord 3307 0.0 0.0 361500 10408 ? Sl avril09 0:00 /usr/lib/x86_64-linux-gnu/colord/colord-sane
root 3422 0.0 0.0 16256 936 tty1 Ss+ avril09 0:00 /sbin/getty 38400 tty1
root 3423 0.0 0.0 16256 944 tty2 Ss+ avril09 0:00 /sbin/getty 38400 tty2
root 3424 0.0 0.0 16256 936 tty3 Ss+ avril09 0:00 /sbin/getty 38400 tty3
root 3425 0.0 0.0 16256 932 tty4 Ss+ avril09 0:00 /sbin/getty 38400 tty4
root 3426 0.0 0.0 16256 944 tty5 Ss+ avril09 0:00 /sbin/getty 38400 tty5
root 3427 0.0 0.0 16256 932 tty6 Ss+ avril09 0:00 /sbin/getty 38400 tty6
root 3428 0.0 0.0 0 0 ? R avril09 0:13 [kworker/2:2]
root 3434 0.0 0.0 0 0 ? S avril09 0:00 [kworker/3:2]
114 3445 0.0 0.0 32280 1176 ? Ss avril09 0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
root 3449 0.0 0.0 129204 3324 ? Sl avril09 0:00 /usr/lib/accountsservice/accounts-daemon
root 3453 0.0 0.0 193108 4112 ? Sl avril09 0:00 /usr/sbin/console-kit-daemon --no-daemon
114 3635 0.0 0.0 53664 2388 ? S avril09 0:00 /usr/lib/gvfs/gvfsd
rtkit 3683 0.0 0.0 105168 1332 ? SNl avril09 0:06 /usr/lib/rtkit/rtkit-daemon
114 3687 0.0 0.0 218500 3468 ? Sl avril09 0:00 /usr/lib/at-spi2-core/at-spi-bus-launcher
114 3691 0.0 0.0 32008 1604 ? S avril09 0:00 /usr/bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-address 3
root 3711 0.0 0.0 9960 3712 ? S avril09 0:00 /sbin/dhclient -d -4 -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/dhclient-eth0.pid -lf /var/lib/dhcp/dhclient-1912e92e-6ddc-
root 3792 0.0 0.0 0 0 ? S avril09 0:00 [nfsv4.0-svc]
root 3854 0.0 0.0 49852 1172 ? Ss avril09 0:00 /usr/sbin/sshd
root 5233 0.0 0.0 21832 1456 ? S avril09 0:00 udevd --daemon
root 5234 0.0 0.0 21832 1444 ? S avril09 0:00 udevd --daemon
root 20780 0.0 0.0 96268 4052 ? Ss avril12 0:00 sshd: lalexandre [priv]
10084 20785 0.0 0.0 96268 1892 ? S avril12 0:00 sshd: lalexandre@pts/3
10084 20786 0.0 0.0 23856 4480 pts/3 Ss avril12 0:00 -bash
root 20894 0.0 0.0 96268 4052 ? Ss avril12 0:00 sshd: lalexandre [priv]
10084 20899 0.0 0.0 96404 1892 ? S avril12 0:00 sshd: lalexandre@pts/4
10084 20900 0.0 0.0 23956 4696 pts/4 Ss avril12 0:00 -bash
root 21028 0.0 0.0 64908 2184 pts/4 S avril12 0:00 sudo bash
root 21029 0.0 0.0 19448 2208 pts/4 S avril12 0:00 bash
root 21044 0.0 0.0 64908 2200 pts/3 S avril12 0:00 sudo bash
root 21045 0.0 0.0 19448 2204 pts/3 S+ avril12 0:00 bash

Thank you 

On 16.04.2013 13:03, Kristof Provost wrote:

> On 2013-04-16 10:35:05 (+0200), Alexandre Laurent wrote:
>
>> My problem is that in some cases it is not working at all. It works
>> fine if I am running both programs in the same instance of the
>> terminal, or from a script (so, same instance of interpreter). But
>> this is not working if I am running the instances in separate SSH
>> sessions. When I say it is not working, both

Why the niceness is not always taken into account ?

2013-04-16 Thread Alexandre Laurent
Hello,

I have some questions about nice and program scheduling.
My machine has an 8-core CPU. I am using a CPU-intensive test, written
with OpenMP and running on all 8 cores, for the experiments.

When I run my test it takes around 12s. If I start two instances of
this test at the same time, it takes 24s, which is totally fine and
expected.

I wanted to prioritize one instance of the test with nice. To do this,
I am applying a niceness of -20 to the privileged one, and setting a
niceness of 20 to "slow down" the second one.
This usually works well. The privileged one will run in around 12s
(so, at full speed) and the other one in 24s (i.e., paused for 12s,
then running the last 12s at full speed).
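The setup described above could be sketched like this, with a short `sha1sum` run as a hypothetical stand-in for the 12-second OpenMP benchmark (the real binary is not shown in the thread). Negative niceness requires root, and requested values are clamped to the -20..19 range:

```shell
# Two stand-in CPU-bound jobs with opposite niceness; the first line
# needs root privileges for the negative value.
nice -n -20 sh -c 'head -c 50000000 /dev/zero | sha1sum' > /dev/null 2>&1 &
nice -n 19  sh -c 'head -c 50000000 /dev/zero | sha1sum' > /dev/null &
wait
```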

My problem is that in some cases it is not working at all. It works
fine if I am running both programs in the same instance of the
terminal, or from a script (so, same instance of interpreter). But this
is not working if I am running the instances in separate SSH sessions.
When I say it is not working, I mean that both instances take 24s to
run and the CPU usage is simply shared between the tasks. More
precisely:

I am running the same test, but connecting twice to the remote machine
(one connection per test instance). I am using exactly the same
commands as during the other experiments. But when using two SSH
instances, the niceness is not taken into account. The CPU is shared
equally between both instances, even though htop shows a niceness of 19
and -20 for the low-priority program and the privileged program
respectively.

(For information: if I run my program through SSH using a script, or
even run the commands directly in the SSH terminal, it works as
expected. So the cause is not SSH itself.)

Can you explain why, in such a case, the niceness is not taken into
account?
Can you tell me how I can work around this problem to effectively set
the niceness and have it respected by the system?

Best regards,
