Excuse me, I have committed a stupid mistake. The extra mpihello
processes were leftovers from previous runs (SGE processes aborted by
the qdel command). In this respect the world is now as it should be: the
number of processes on the nodes now sums to the number of allocated
slots.
I have attached the output of the 'ps -e f' command of the master node
and the output of the 'qstat -g t -u ulrich' command.
This seems to me to be correct.
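For the record, this is roughly how I compare the two counts now. A minimal
sketch (node names from this cluster; passwordless ssh to the nodes is
assumed, and note that the MASTER entry also occupies one slot on the first
node):

for n in exec-node0{1..8}; do
    # granted slots on this node vs. running mpihello processes
    slots=$(qstat -g t -u ulrich | grep -c "all.q@$n")
    procs=$(ssh "$n" "ps -ef | grep '[m]pihello' | grep -vc mpiexec")
    echo "$n: slots=$slots processes=$procs"
done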
The original problem remains: why do jobs allocate cores on a node but do
nothing?
As I wrote before, there is probably no OpenMP involvement.
The qmaster/messages file does not say anything about hanging/pending jobs.
The problem is that today I could not reproduce nodes which do nothing
although their cores are allocated. Let me test a bit until I can reproduce
the problem. Then I will send you the output of 'ps -e f' and qstat.
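In the meantime I will try to catch the bad state automatically. A rough
sketch (bash; the log file name is made up, ssh access assumed as above):

while true; do
    for n in exec-node0{1..8}; do
        slots=$(qstat -g t -u ulrich | grep -c "all.q@$n")
        procs=$(ssh "$n" "ps -ef | grep -c '[m]pihello'")
        # a node that holds slots for me but runs nothing is the suspect
        if [ "$slots" -gt 0 ] && [ "$procs" -eq 0 ]; then
            echo "$(date): $n holds $slots slots but runs nothing" >> idle-nodes.log
            ssh "$n" "ps -e f --cols=500" >> idle-nodes.log
        fi
    done
    sleep 60
done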
Is there anything else which I could test?
With kind regards, and thanks a lot for your help so far, ulrich
On 08/15/2016 05:37 PM, Reuti wrote:
>
>> On 15.08.2016 at 17:03, Ulrich Hiller <[email protected]> wrote:
>>
>> Hello,
>>
>> thank you for the clarification. I must have misunderstood you.
>> Now I did it. In the example I send now, the master node was exec-node01
>> (it varied from attempt to attempt). The output is in the master-node
>> file. The qstat file is the output of
>> qstat -g t -u '*'
>> That seems to look normal.
>>
>> Now I created a simple C file with an endless loop:
>>
>> #include <stdio.h>
>> int main(void)
>> {
>>     int x;
>>     for (x = 0; x = 10; x = x + 1)  /* '=' not '==': always true, loops forever */
>>     {
>>         puts("Hello");
>>     }
>>     return 0;
>> }
>>
>> and compiled it:
>> mpicc mpihello.c -o mpihello
>> and started qsub:
>> qsub -pe orte 300 -j yes -cwd -S /bin/bash <<< "mpiexec -n 300 mpihello"
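>>
>> To be sure the compiler never optimizes the test loop away, the cautious
>> variant would be to compile without optimizations (the -O0 flag):
>>
>> mpicc -O0 mpihello.c -o mpihello
>>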
>> The outputs look the same as for the sleep command above.
>> But now I counted the jobs:
>>
>> qstat -g t -u '*' | grep -ic slave
>> This results in the number '300', which I expected.
>>
>> On the execute nodes I did:
>> ps -ef | grep mpihello | grep -v grep | grep -vc mpiexec
>
> f w/o -
>
> $ ps -e f
>
> will list a nice tree of the processes.
>
>
>> (I counted the 'mpihello' processes)
>> This is the result:
>> exec-node01: 43
>> exec-node02: 82
>> exec-node03: 83
>> exec-node04: 82
>> exec-node05: 82
>> exec-node06: 80
>> exec-node07: 64
>> exec-node08: 64
>
> To investigate this it would be good to post the complete slot allocation by
> `qstat -g t -u <your user>`, the master of the MPI application and one of the
> slave nodes' `ps -e f --cols=500`. Any "mpihello" in the path?
>
> -- Reuti
>
>
>> Which gives the sum of 580.
>> When I count the number of free slots together (from 'qhost -q') I also
>> get 300, which I expect.
>> Where do the extra processes on the nodes come from?
>>
>> This difference is reproducible.
>>
>> The libgomp.so.1.0.0 library is installed, but apart from that nothing
>> with OpenMP.
>>
>> With kind regards, ulrich
>>
>> On 08/15/2016 02:30 PM, Ulrich Hiller wrote:
>>> Hello,
>>>
>>>> The other issue seems to be that in fact your job is using only one
>>>> machine, which means that it is essentially ignoring any granted slot
>>>> allocation. While the job is running, can you please execute on the
>>>> master node of the parallel job:
>>>>
>>>> $ ps -e f
>>>>
>>>> (f w/o -) and post the relevant lines belonging to either sge_execd or
>>>> just running as kids of the init process, in case they jumped out of the
>>>> process tree. Maybe a good start would be to execute something like
>>>> `mpiexec sleep 300` in the jobscript.
>>>>
>>>
>>> I invoked
>>> qsub -pe orte 160 -j yes -cwd -S /bin/bash <<< "mpiexec -n 160 sleep 300"
>>>
>>> The only line ('ps -e f') on the master node was:
>>> 55722 ? Sl 3:42 /opt/sge/bin/lx-amd64/sge_qmaster
>>>
>>> No other SGE lines, no child processes from it, and no other init
>>> processes leading to SGE, while at the same time the sleep processes were
>>> running on the nodes (checked with the ps command on the nodes).
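>>>
>>> (The check on the nodes was roughly the following; direct ssh access to
>>> the nodes is assumed:
>>>
>>> for n in exec-node0{1..8}; do
>>>     echo -n "$n: "; ssh "$n" "ps -ef | grep -c '[s]leep 300'"
>>> done
>>> )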
>>>
>>> The qstat command gave:
>>> 264 0.60500 STDIN ulrich r 08/15/2016 11:33:02 all.q@exec-node01 MASTER
>>> all.q@exec-node01 SLAVE
>>> all.q@exec-node01 SLAVE
>>> all.q@exec-node01 SLAVE
>>> [...]
>>> 264 0.60500 STDIN ulrich r 08/15/2016 11:33:02 all.q@exec-node03 SLAVE
>>> all.q@exec-node03 SLAVE
>>> all.q@exec-node03 SLAVE
>>> [...]
>>> 264 0.60500 STDIN ulrich r 08/15/2016 11:33:02 all.q@exec-node05 SLAVE
>>> all.q@exec-node05 SLAVE
>>> [...]
>>>
>>> Because there was only the master daemon running on the master node, and
>>> you were talking about child processes: Was this the normal behaviour of
>>> my cluster, or is there something wrong?
>>>
>>> Kind regards, ulrich
>>>
>>> On 08/12/2016 07:11 PM, Reuti wrote:
>>>> Hi,
>>>>
>>>>> On 12.08.2016 at 18:48, Ulrich Hiller <[email protected]> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> I have a strange effect, where I am not sure whether it is "only" a
>>>>> misconfiguration or a bug.
>>>>>
>>>>> First: I run Son of Grid Engine 8.1.9-1.el6.x86_64 (I installed the RHEL
>>>>> RPM on an openSUSE 13.1 machine; this should not matter in this case,
>>>>> and it is reported to run on openSUSE).
>>>>>
>>>>> mpirun and mpiexec are from openmpi-1.10.3 (no other MPI was installed,
>>>>> neither on the master nor on the slaves). The installation was made with:
>>>>> ./configure --prefix=`pwd`/build --disable-dlopen --disable-mca-dso
>>>>> --with-orte --with-sge --with-x --enable-mpi-thread-multiple
>>>>> --enable-orterun-prefix-by-default --enable-mpirun-prefix-by-default
>>>>> --enable-orte-static-ports --enable-mpi-cxx --enable-mpi-cxx-seek
>>>>> --enable-oshmem --enable-java --enable-mpi-java
>>>>> make
>>>>> make install
>>>>>
>>>>> I attached the outputs of 'qconf -ap all.q', 'qconf -sconf' and
>>>>> 'qconf -sp orte' as text files.
>>>>>
>>>>> Now my problem:
>>>>> I asked for 20 cores, and if I run qstat -u '*' it shows that this job
>>>>> is being run on slave07 using 20 cores, but this is not true! If I run
>>>>> qstat -f -u '*' I see that this job is only using 3 cores on slave07 and
>>>>> that there are 17 cores on other nodes allocated to this job which are
>>>>> in fact unused!
>>>>
>>>> qstat will list only the master node of the parallel job and the number of
>>>> overall slots. The granted allocation you can check with:
>>>>
>>>> $ qstat -g t -u '*'
>>>>
>>>> The other issue seems to be that in fact your job is using only one
>>>> machine, which means that it is essentially ignoring any granted slot
>>>> allocation. While the job is running, can you please execute on the master
>>>> node of the parallel job:
>>>>
>>>> $ ps -e f
>>>>
>>>> (f w/o -) and post the relevant lines belonging to either sge_execd or
>>>> just running as kids of the init process, in case they jumped out of the
>>>> process tree. Maybe a good start would be to execute something like
>>>> `mpiexec sleep 300` in the jobscript.
>>>>
>>>> A next step could be a `mpihello.c` where you put an almost endless loop
>>>> inside and switch off all optimizations during compilation, to check
>>>> whether the slave processes are distributed in the correct way.
>>>>
>>>> Note that some applications will check the number of cores they are
>>>> running on and start, via OpenMP (not Open MPI), as many threads as
>>>> cores are found. Could this be the case for your application too?
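>>>>
>>>> (To rule that out quickly, one could pin the OpenMP thread count in the
>>>> jobscript; the application name below is just a placeholder:
>>>>
>>>> export OMP_NUM_THREADS=1
>>>> mpiexec -n 160 ./your_app
>>>>
>>>> )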
>>>>
>>>> -- Reuti
>>>>
>>>>
>>>>> Or another example:
>>>>> My job took, say, 6 CPUs on slave07 and 14 on slave06, but nothing was
>>>>> running on 06, and therefore a waste of resources on 06 and an overload
>>>>> on 07 become highly possible (the numbers are made up).
>>>>> If I ran 1-CPU jobs independently that would not be an issue, but
>>>>> imagine I now request 60 CPUs on slave07; that would seriously overload
>>>>> the node in many cases.
>>>>>
>>>>> Or another example:
>>>>> If I ask for, say, 50 CPUs, the job will start on one node, e.g.
>>>>> slave01, but reserve only, say, 15 CPUs out of 64 and reserve the rest
>>>>> on many other nodes (obviously wasting space doing nothing).
>>>>> This has the bad consequence of allocating many more CPUs than available
>>>>> when many jobs are running; imagine you have 10 jobs like this one...
>>>>> some nodes will run maybe 3 even if they only have 24 CPUs...
>>>>>
>>>>> I hope that i have made clear what the issue is.
>>>>>
>>>>> I also see that `qstat` and `qstat -f` are in disagreement. The latter
>>>>> is correct; I checked the processes running on the nodes.
>>>>>
>>>>>
>>>>> Has somebody already encountered such a problem? Does somebody have an
>>>>> idea where to look or what to test?
>>>>>
>>>>> With kind regards, ulrich
>>>>>
>>>>>
>>>>>
>>>>> <qhost.txt><qconf-sconf.txt><qconf-mp-orte.txt><qconf-all.q>
>>>>
>> <qstat.txt><master-node.txt>
>
PID TTY STAT TIME COMMAND
2 ? S 0:00 [kthreadd]
3 ? S 0:00 \_ [ksoftirqd/0]
4 ? S 0:00 \_ [kworker/0:0]
5 ? S< 0:00 \_ [kworker/0:0H]
6 ? S 0:00 \_ [kworker/u128:0]
7 ? S 0:01 \_ [kworker/u129:0]
8 ? S 0:00 \_ [migration/0]
9 ? S 0:00 \_ [rcuc/0]
10 ? S 0:00 \_ [rcub/0]
11 ? S 0:00 \_ [rcub/1]
12 ? S 0:00 \_ [rcu_preempt]
13 ? S 0:00 \_ [rcuop/0]
14 ? S 0:00 \_ [rcuop/1]
15 ? S 0:00 \_ [rcuop/2]
16 ? S 0:00 \_ [rcuop/3]
17 ? S 0:00 \_ [rcuop/4]
18 ? S 0:00 \_ [rcuop/5]
19 ? S 0:00 \_ [rcuop/6]
20 ? S 0:00 \_ [rcuop/7]
21 ? S 0:00 \_ [rcuop/8]
22 ? S 0:00 \_ [rcuop/9]
23 ? S 0:00 \_ [rcuop/10]
24 ? S 0:00 \_ [rcuop/11]
25 ? S 0:00 \_ [rcuop/12]
26 ? S 0:00 \_ [rcuop/13]
27 ? S 0:00 \_ [rcuop/14]
28 ? S 0:00 \_ [rcuop/15]
29 ? S 0:00 \_ [rcuop/16]
30 ? S 0:00 \_ [rcuop/17]
31 ? S 0:00 \_ [rcuop/18]
32 ? S 0:00 \_ [rcuop/19]
33 ? S 0:00 \_ [rcuop/20]
34 ? S 0:00 \_ [rcuop/21]
35 ? S 0:00 \_ [rcuop/22]
36 ? S 0:00 \_ [rcuop/23]
37 ? S 0:00 \_ [rcuop/24]
38 ? S 0:00 \_ [rcuop/25]
39 ? S 0:00 \_ [rcuop/26]
40 ? S 0:00 \_ [rcuop/27]
41 ? S 0:00 \_ [rcuop/28]
42 ? S 0:00 \_ [rcuop/29]
43 ? S 0:00 \_ [rcuop/30]
44 ? S 0:00 \_ [rcuop/31]
45 ? S 0:00 \_ [rcuop/32]
46 ? S 0:00 \_ [rcuop/33]
47 ? S 0:00 \_ [rcuop/34]
48 ? S 0:00 \_ [rcuop/35]
49 ? S 0:00 \_ [rcuop/36]
50 ? S 0:00 \_ [rcuop/37]
51 ? S 0:00 \_ [rcuop/38]
52 ? S 0:00 \_ [rcuop/39]
53 ? S 0:00 \_ [rcuop/40]
54 ? S 0:00 \_ [rcuop/41]
55 ? S 0:00 \_ [rcuop/42]
56 ? S 0:00 \_ [rcuop/43]
57 ? S 0:00 \_ [rcuop/44]
58 ? S 0:00 \_ [rcuop/45]
59 ? S 0:00 \_ [rcuop/46]
60 ? S 0:00 \_ [rcuop/47]
61 ? S 0:00 \_ [rcuop/48]
62 ? S 0:00 \_ [rcuop/49]
63 ? S 0:00 \_ [rcuop/50]
64 ? S 0:00 \_ [rcuop/51]
65 ? S 0:00 \_ [rcuop/52]
66 ? S 0:00 \_ [rcuop/53]
67 ? S 0:00 \_ [rcuop/54]
68 ? S 0:00 \_ [rcuop/55]
69 ? S 0:00 \_ [rcuop/56]
70 ? S 0:00 \_ [rcuop/57]
71 ? S 0:00 \_ [rcuop/58]
72 ? S 0:00 \_ [rcuop/59]
73 ? S 0:00 \_ [rcuop/60]
74 ? S 0:00 \_ [rcuop/61]
75 ? S 0:00 \_ [rcuop/62]
76 ? S 0:00 \_ [rcuop/63]
77 ? S 0:00 \_ [rcu_bh]
78 ? S 0:00 \_ [rcuob/0]
79 ? S 0:00 \_ [rcuob/1]
80 ? S 0:00 \_ [rcuob/2]
81 ? S 0:00 \_ [rcuob/3]
82 ? S 0:00 \_ [rcuob/4]
83 ? S 0:00 \_ [rcuob/5]
84 ? S 0:00 \_ [rcuob/6]
85 ? S 0:00 \_ [rcuob/7]
86 ? S 0:00 \_ [rcuob/8]
87 ? S 0:00 \_ [rcuob/9]
88 ? S 0:00 \_ [rcuob/10]
89 ? S 0:00 \_ [rcuob/11]
90 ? S 0:00 \_ [rcuob/12]
91 ? S 0:00 \_ [rcuob/13]
92 ? S 0:00 \_ [rcuob/14]
93 ? S 0:00 \_ [rcuob/15]
94 ? S 0:00 \_ [rcuob/16]
95 ? S 0:00 \_ [rcuob/17]
96 ? S 0:00 \_ [rcuob/18]
97 ? S 0:00 \_ [rcuob/19]
98 ? S 0:00 \_ [rcuob/20]
99 ? S 0:00 \_ [rcuob/21]
100 ? S 0:00 \_ [rcuob/22]
101 ? S 0:00 \_ [rcuob/23]
102 ? S 0:00 \_ [rcuob/24]
103 ? S 0:00 \_ [rcuob/25]
104 ? S 0:00 \_ [rcuob/26]
105 ? S 0:00 \_ [rcuob/27]
106 ? S 0:00 \_ [rcuob/28]
107 ? S 0:00 \_ [rcuob/29]
108 ? S 0:00 \_ [rcuob/30]
109 ? S 0:00 \_ [rcuob/31]
110 ? S 0:00 \_ [rcuob/32]
111 ? S 0:00 \_ [rcuob/33]
112 ? S 0:00 \_ [rcuob/34]
113 ? S 0:00 \_ [rcuob/35]
114 ? S 0:00 \_ [rcuob/36]
115 ? S 0:00 \_ [rcuob/37]
116 ? S 0:00 \_ [rcuob/38]
117 ? S 0:00 \_ [rcuob/39]
118 ? S 0:00 \_ [rcuob/40]
119 ? S 0:00 \_ [rcuob/41]
120 ? S 0:00 \_ [rcuob/42]
121 ? S 0:00 \_ [rcuob/43]
122 ? S 0:00 \_ [rcuob/44]
123 ? S 0:00 \_ [rcuob/45]
124 ? S 0:00 \_ [rcuob/46]
125 ? S 0:00 \_ [rcuob/47]
126 ? S 0:00 \_ [rcuob/48]
127 ? S 0:00 \_ [rcuob/49]
128 ? S 0:00 \_ [rcuob/50]
129 ? S 0:00 \_ [rcuob/51]
130 ? S 0:00 \_ [rcuob/52]
131 ? S 0:00 \_ [rcuob/53]
132 ? S 0:00 \_ [rcuob/54]
133 ? S 0:00 \_ [rcuob/55]
134 ? S 0:00 \_ [rcuob/56]
135 ? S 0:00 \_ [rcuob/57]
136 ? S 0:00 \_ [rcuob/58]
137 ? S 0:00 \_ [rcuob/59]
138 ? S 0:00 \_ [rcuob/60]
139 ? S 0:00 \_ [rcuob/61]
140 ? S 0:00 \_ [rcuob/62]
141 ? S 0:00 \_ [rcuob/63]
142 ? S 0:00 \_ [rcu_sched]
143 ? S 0:00 \_ [rcuos/0]
144 ? S 0:00 \_ [rcuos/1]
145 ? S 0:00 \_ [rcuos/2]
146 ? S 0:00 \_ [rcuos/3]
147 ? S 0:00 \_ [rcuos/4]
148 ? S 0:00 \_ [rcuos/5]
149 ? S 0:00 \_ [rcuos/6]
150 ? S 0:00 \_ [rcuos/7]
151 ? S 0:00 \_ [rcuos/8]
152 ? S 0:00 \_ [rcuos/9]
153 ? S 0:00 \_ [rcuos/10]
154 ? S 0:00 \_ [rcuos/11]
155 ? S 0:00 \_ [rcuos/12]
156 ? S 0:00 \_ [rcuos/13]
157 ? S 0:00 \_ [rcuos/14]
158 ? S 0:00 \_ [rcuos/15]
159 ? S 0:00 \_ [rcuos/16]
160 ? S 0:00 \_ [rcuos/17]
161 ? S 0:00 \_ [rcuos/18]
162 ? S 0:00 \_ [rcuos/19]
163 ? S 0:00 \_ [rcuos/20]
164 ? S 0:00 \_ [rcuos/21]
165 ? S 0:00 \_ [rcuos/22]
166 ? S 0:00 \_ [rcuos/23]
167 ? S 0:00 \_ [rcuos/24]
168 ? S 0:00 \_ [rcuos/25]
169 ? S 0:00 \_ [rcuos/26]
170 ? S 0:00 \_ [rcuos/27]
171 ? S 0:00 \_ [rcuos/28]
172 ? S 0:00 \_ [rcuos/29]
173 ? S 0:00 \_ [rcuos/30]
174 ? S 0:00 \_ [rcuos/31]
175 ? S 0:00 \_ [rcuos/32]
176 ? S 0:00 \_ [rcuos/33]
177 ? S 0:00 \_ [rcuos/34]
178 ? S 0:00 \_ [rcuos/35]
179 ? S 0:00 \_ [rcuos/36]
180 ? S 0:00 \_ [rcuos/37]
181 ? S 0:00 \_ [rcuos/38]
182 ? S 0:00 \_ [rcuos/39]
183 ? S 0:00 \_ [rcuos/40]
184 ? S 0:00 \_ [rcuos/41]
185 ? S 0:00 \_ [rcuos/42]
186 ? S 0:00 \_ [rcuos/43]
187 ? S 0:00 \_ [rcuos/44]
188 ? S 0:00 \_ [rcuos/45]
189 ? S 0:00 \_ [rcuos/46]
190 ? S 0:00 \_ [rcuos/47]
191 ? S 0:00 \_ [rcuos/48]
192 ? S 0:00 \_ [rcuos/49]
193 ? S 0:00 \_ [rcuos/50]
194 ? S 0:00 \_ [rcuos/51]
195 ? S 0:00 \_ [rcuos/52]
196 ? S 0:00 \_ [rcuos/53]
197 ? S 0:00 \_ [rcuos/54]
198 ? S 0:00 \_ [rcuos/55]
199 ? S 0:00 \_ [rcuos/56]
200 ? S 0:00 \_ [rcuos/57]
201 ? S 0:00 \_ [rcuos/58]
202 ? S 0:00 \_ [rcuos/59]
203 ? S 0:00 \_ [rcuos/60]
204 ? S 0:00 \_ [rcuos/61]
205 ? S 0:00 \_ [rcuos/62]
206 ? S 0:00 \_ [rcuos/63]
207 ? S 0:00 \_ [watchdog/0]
208 ? S 0:00 \_ [watchdog/1]
209 ? S 0:00 \_ [rcuc/1]
210 ? S 0:00 \_ [migration/1]
211 ? S 0:00 \_ [ksoftirqd/1]
213 ? S< 0:00 \_ [kworker/1:0H]
214 ? S 0:00 \_ [watchdog/2]
215 ? S 0:00 \_ [rcuc/2]
216 ? S 0:00 \_ [migration/2]
217 ? S 0:00 \_ [ksoftirqd/2]
219 ? S< 0:00 \_ [kworker/2:0H]
220 ? S 0:00 \_ [watchdog/3]
221 ? S 0:00 \_ [rcuc/3]
222 ? S 0:00 \_ [migration/3]
223 ? S 0:00 \_ [ksoftirqd/3]
224 ? S 0:00 \_ [kworker/3:0]
225 ? S< 0:00 \_ [kworker/3:0H]
226 ? S 0:00 \_ [watchdog/4]
227 ? S 0:00 \_ [rcuc/4]
228 ? S 0:00 \_ [migration/4]
229 ? S 0:00 \_ [ksoftirqd/4]
231 ? S< 0:00 \_ [kworker/4:0H]
232 ? S 0:00 \_ [watchdog/5]
233 ? S 0:00 \_ [rcuc/5]
234 ? S 0:00 \_ [migration/5]
235 ? S 0:00 \_ [ksoftirqd/5]
237 ? S< 0:00 \_ [kworker/5:0H]
238 ? S 0:00 \_ [watchdog/6]
239 ? S 0:00 \_ [rcuc/6]
240 ? S 0:00 \_ [migration/6]
241 ? S 0:00 \_ [ksoftirqd/6]
243 ? S< 0:00 \_ [kworker/6:0H]
244 ? S 0:00 \_ [watchdog/7]
245 ? S 0:00 \_ [rcuc/7]
246 ? S 0:00 \_ [migration/7]
247 ? S 0:00 \_ [ksoftirqd/7]
249 ? S< 0:00 \_ [kworker/7:0H]
250 ? S 0:00 \_ [watchdog/8]
251 ? S 0:00 \_ [rcuc/8]
252 ? S 0:00 \_ [migration/8]
253 ? S 0:00 \_ [ksoftirqd/8]
254 ? S 0:00 \_ [kworker/8:0]
255 ? S< 0:00 \_ [kworker/8:0H]
256 ? S 0:00 \_ [kworker/u130:0]
257 ? S 0:00 \_ [watchdog/9]
258 ? S 0:00 \_ [rcuc/9]
259 ? S 0:00 \_ [migration/9]
260 ? S 0:00 \_ [ksoftirqd/9]
261 ? S 0:00 \_ [kworker/9:0]
262 ? S< 0:00 \_ [kworker/9:0H]
263 ? S 0:00 \_ [watchdog/10]
264 ? S 0:00 \_ [rcuc/10]
265 ? S 0:00 \_ [migration/10]
266 ? S 0:00 \_ [ksoftirqd/10]
267 ? S 0:00 \_ [kworker/10:0]
268 ? S< 0:00 \_ [kworker/10:0H]
269 ? S 0:00 \_ [watchdog/11]
270 ? S 0:00 \_ [rcuc/11]
271 ? S 0:00 \_ [migration/11]
272 ? S 0:00 \_ [ksoftirqd/11]
274 ? S< 0:00 \_ [kworker/11:0H]
275 ? S 0:00 \_ [watchdog/12]
276 ? S 0:00 \_ [rcuc/12]
277 ? S 0:00 \_ [migration/12]
278 ? S 0:00 \_ [ksoftirqd/12]
280 ? S< 0:00 \_ [kworker/12:0H]
281 ? S 0:00 \_ [watchdog/13]
282 ? S 0:00 \_ [rcuc/13]
283 ? S 0:00 \_ [migration/13]
284 ? S 0:00 \_ [ksoftirqd/13]
286 ? S< 0:00 \_ [kworker/13:0H]
287 ? S 0:00 \_ [watchdog/14]
288 ? S 0:00 \_ [rcuc/14]
289 ? S 0:00 \_ [migration/14]
290 ? S 0:00 \_ [ksoftirqd/14]
292 ? S< 0:00 \_ [kworker/14:0H]
293 ? S 0:00 \_ [watchdog/15]
294 ? S 0:00 \_ [rcuc/15]
295 ? S 0:00 \_ [migration/15]
296 ? S 0:00 \_ [ksoftirqd/15]
298 ? S< 0:00 \_ [kworker/15:0H]
299 ? S 0:00 \_ [watchdog/16]
300 ? S 0:00 \_ [rcuc/16]
301 ? S 0:00 \_ [migration/16]
302 ? S 0:00 \_ [ksoftirqd/16]
304 ? S< 0:00 \_ [kworker/16:0H]
305 ? S 0:00 \_ [rcub/2]
307 ? S 0:00 \_ [watchdog/17]
308 ? S 0:00 \_ [rcuc/17]
309 ? S 0:00 \_ [migration/17]
310 ? S 0:00 \_ [ksoftirqd/17]
312 ? S< 0:00 \_ [kworker/17:0H]
313 ? S 0:00 \_ [watchdog/18]
314 ? S 0:00 \_ [rcuc/18]
315 ? S 0:00 \_ [migration/18]
316 ? S 0:00 \_ [ksoftirqd/18]
317 ? S 0:00 \_ [kworker/18:0]
318 ? S< 0:00 \_ [kworker/18:0H]
319 ? S 0:00 \_ [watchdog/19]
320 ? S 0:00 \_ [rcuc/19]
321 ? S 0:00 \_ [migration/19]
322 ? S 0:00 \_ [ksoftirqd/19]
324 ? S< 0:00 \_ [kworker/19:0H]
325 ? S 0:00 \_ [watchdog/20]
326 ? S 0:00 \_ [rcuc/20]
327 ? S 0:00 \_ [migration/20]
328 ? S 0:00 \_ [ksoftirqd/20]
330 ? S< 0:00 \_ [kworker/20:0H]
331 ? S 0:00 \_ [watchdog/21]
332 ? S 0:00 \_ [rcuc/21]
333 ? S 0:00 \_ [migration/21]
334 ? S 0:00 \_ [ksoftirqd/21]
335 ? S 0:00 \_ [kworker/21:0]
336 ? S< 0:00 \_ [kworker/21:0H]
337 ? S 0:00 \_ [watchdog/22]
338 ? S 0:00 \_ [rcuc/22]
339 ? S 0:00 \_ [migration/22]
340 ? S 0:00 \_ [ksoftirqd/22]
342 ? S< 0:00 \_ [kworker/22:0H]
343 ? S 0:00 \_ [watchdog/23]
344 ? S 0:00 \_ [rcuc/23]
345 ? S 0:00 \_ [migration/23]
346 ? S 0:00 \_ [ksoftirqd/23]
348 ? S< 0:00 \_ [kworker/23:0H]
349 ? S 0:00 \_ [watchdog/24]
350 ? S 0:00 \_ [rcuc/24]
351 ? S 0:00 \_ [migration/24]
352 ? S 0:00 \_ [ksoftirqd/24]
354 ? S< 0:00 \_ [kworker/24:0H]
355 ? S 0:01 \_ [kworker/u132:0]
356 ? S 0:00 \_ [watchdog/25]
357 ? S 0:00 \_ [rcuc/25]
358 ? S 0:00 \_ [migration/25]
359 ? S 0:00 \_ [ksoftirqd/25]
361 ? S< 0:00 \_ [kworker/25:0H]
362 ? S 0:00 \_ [watchdog/26]
363 ? S 0:00 \_ [rcuc/26]
364 ? S 0:00 \_ [migration/26]
365 ? S 0:00 \_ [ksoftirqd/26]
366 ? S 0:00 \_ [kworker/26:0]
367 ? S< 0:00 \_ [kworker/26:0H]
368 ? S 0:00 \_ [watchdog/27]
369 ? S 0:00 \_ [rcuc/27]
370 ? S 0:00 \_ [migration/27]
371 ? S 0:00 \_ [ksoftirqd/27]
373 ? S< 0:00 \_ [kworker/27:0H]
374 ? S 0:00 \_ [watchdog/28]
375 ? S 0:00 \_ [rcuc/28]
376 ? S 0:00 \_ [migration/28]
377 ? S 0:00 \_ [ksoftirqd/28]
379 ? S< 0:00 \_ [kworker/28:0H]
380 ? S 0:00 \_ [watchdog/29]
381 ? S 0:00 \_ [rcuc/29]
382 ? S 0:00 \_ [migration/29]
383 ? S 0:00 \_ [ksoftirqd/29]
385 ? S< 0:00 \_ [kworker/29:0H]
386 ? S 0:00 \_ [watchdog/30]
387 ? S 0:00 \_ [rcuc/30]
388 ? S 0:00 \_ [migration/30]
389 ? S 0:00 \_ [ksoftirqd/30]
391 ? S< 0:00 \_ [kworker/30:0H]
392 ? S 0:00 \_ [watchdog/31]
393 ? S 0:00 \_ [rcuc/31]
394 ? S 0:00 \_ [migration/31]
395 ? S 0:00 \_ [ksoftirqd/31]
397 ? S< 0:00 \_ [kworker/31:0H]
398 ? S 0:00 \_ [watchdog/32]
399 ? S 0:00 \_ [rcuc/32]
400 ? S 0:00 \_ [migration/32]
401 ? S 0:00 \_ [ksoftirqd/32]
402 ? S 0:00 \_ [kworker/32:0]
403 ? S< 0:00 \_ [kworker/32:0H]
404 ? S 0:00 \_ [rcub/3]
405 ? S 0:00 \_ [kworker/u133:0]
406 ? S 0:00 \_ [watchdog/33]
407 ? S 0:00 \_ [rcuc/33]
408 ? S 0:00 \_ [migration/33]
409 ? S 0:00 \_ [ksoftirqd/33]
410 ? S 0:00 \_ [kworker/33:0]
411 ? S< 0:00 \_ [kworker/33:0H]
412 ? S 0:00 \_ [watchdog/34]
413 ? S 0:00 \_ [rcuc/34]
414 ? S 0:00 \_ [migration/34]
415 ? S 0:00 \_ [ksoftirqd/34]
417 ? S< 0:00 \_ [kworker/34:0H]
418 ? S 0:00 \_ [watchdog/35]
419 ? S 0:00 \_ [rcuc/35]
420 ? S 0:00 \_ [migration/35]
421 ? S 0:00 \_ [ksoftirqd/35]
423 ? S< 0:00 \_ [kworker/35:0H]
424 ? S 0:00 \_ [watchdog/36]
425 ? S 0:00 \_ [rcuc/36]
426 ? S 0:00 \_ [migration/36]
427 ? S 0:00 \_ [ksoftirqd/36]
428 ? S 0:00 \_ [kworker/36:0]
429 ? S< 0:00 \_ [kworker/36:0H]
430 ? S 0:00 \_ [watchdog/37]
431 ? S 0:00 \_ [rcuc/37]
432 ? S 0:00 \_ [migration/37]
433 ? S 0:00 \_ [ksoftirqd/37]
434 ? S 0:00 \_ [kworker/37:0]
435 ? S< 0:00 \_ [kworker/37:0H]
436 ? S 0:00 \_ [watchdog/38]
437 ? S 0:00 \_ [rcuc/38]
438 ? S 0:00 \_ [migration/38]
439 ? S 0:00 \_ [ksoftirqd/38]
441 ? S< 0:00 \_ [kworker/38:0H]
442 ? S 0:00 \_ [watchdog/39]
443 ? S 0:00 \_ [rcuc/39]
444 ? S 0:00 \_ [migration/39]
445 ? S 0:00 \_ [ksoftirqd/39]
446 ? S 0:00 \_ [kworker/39:0]
447 ? S< 0:00 \_ [kworker/39:0H]
448 ? S 0:00 \_ [watchdog/40]
449 ? S 0:00 \_ [rcuc/40]
450 ? S 0:00 \_ [migration/40]
451 ? S 0:00 \_ [ksoftirqd/40]
452 ? S 0:00 \_ [kworker/40:0]
453 ? S< 0:00 \_ [kworker/40:0H]
454 ? S 0:00 \_ [kworker/u134:0]
455 ? S 0:00 \_ [watchdog/41]
456 ? S 0:00 \_ [rcuc/41]
457 ? S 0:00 \_ [migration/41]
458 ? S 0:00 \_ [ksoftirqd/41]
460 ? S< 0:00 \_ [kworker/41:0H]
461 ? S 0:00 \_ [watchdog/42]
462 ? S 0:00 \_ [rcuc/42]
463 ? S 0:00 \_ [migration/42]
464 ? S 0:00 \_ [ksoftirqd/42]
465 ? S 0:00 \_ [kworker/42:0]
466 ? S< 0:00 \_ [kworker/42:0H]
467 ? S 0:00 \_ [watchdog/43]
468 ? S 0:00 \_ [rcuc/43]
469 ? S 0:00 \_ [migration/43]
470 ? S 0:00 \_ [ksoftirqd/43]
472 ? S< 0:00 \_ [kworker/43:0H]
473 ? S 0:00 \_ [watchdog/44]
474 ? S 0:00 \_ [rcuc/44]
475 ? S 0:00 \_ [migration/44]
476 ? S 0:00 \_ [ksoftirqd/44]
478 ? S< 0:00 \_ [kworker/44:0H]
479 ? S 0:00 \_ [watchdog/45]
480 ? S 0:00 \_ [rcuc/45]
481 ? S 0:00 \_ [migration/45]
482 ? S 0:00 \_ [ksoftirqd/45]
484 ? S< 0:00 \_ [kworker/45:0H]
485 ? S 0:00 \_ [watchdog/46]
486 ? S 0:00 \_ [rcuc/46]
487 ? S 0:00 \_ [migration/46]
488 ? S 0:00 \_ [ksoftirqd/46]
490 ? S< 0:00 \_ [kworker/46:0H]
491 ? S 0:00 \_ [watchdog/47]
492 ? S 0:00 \_ [rcuc/47]
493 ? S 0:00 \_ [migration/47]
494 ? S 0:00 \_ [ksoftirqd/47]
496 ? S< 0:00 \_ [kworker/47:0H]
497 ? S 0:00 \_ [watchdog/48]
498 ? S 0:00 \_ [rcuc/48]
499 ? S 0:00 \_ [migration/48]
500 ? S 0:00 \_ [ksoftirqd/48]
502 ? S< 0:00 \_ [kworker/48:0H]
503 ? S 0:00 \_ [rcub/4]
504 ? S 0:00 \_ [kworker/u135:0]
505 ? S 0:00 \_ [watchdog/49]
506 ? S 0:00 \_ [rcuc/49]
507 ? S 0:00 \_ [migration/49]
508 ? S 0:00 \_ [ksoftirqd/49]
510 ? S< 0:00 \_ [kworker/49:0H]
511 ? S 0:00 \_ [watchdog/50]
512 ? S 0:00 \_ [rcuc/50]
513 ? S 0:00 \_ [migration/50]
514 ? S 0:00 \_ [ksoftirqd/50]
516 ? S< 0:00 \_ [kworker/50:0H]
517 ? S 0:00 \_ [watchdog/51]
518 ? S 0:00 \_ [rcuc/51]
519 ? S 0:00 \_ [migration/51]
520 ? S 0:00 \_ [ksoftirqd/51]
521 ? S 0:00 \_ [kworker/51:0]
522 ? S< 0:00 \_ [kworker/51:0H]
523 ? S 0:00 \_ [watchdog/52]
524 ? S 0:00 \_ [rcuc/52]
525 ? S 0:00 \_ [migration/52]
526 ? S 0:00 \_ [ksoftirqd/52]
528 ? S< 0:00 \_ [kworker/52:0H]
529 ? S 0:00 \_ [watchdog/53]
530 ? S 0:00 \_ [rcuc/53]
531 ? S 0:00 \_ [migration/53]
532 ? S 0:00 \_ [ksoftirqd/53]
534 ? S< 0:00 \_ [kworker/53:0H]
535 ? S 0:00 \_ [watchdog/54]
536 ? S 0:00 \_ [rcuc/54]
537 ? S 0:00 \_ [migration/54]
538 ? S 0:00 \_ [ksoftirqd/54]
539 ? S 0:00 \_ [kworker/54:0]
540 ? S< 0:00 \_ [kworker/54:0H]
541 ? S 0:00 \_ [watchdog/55]
542 ? S 0:00 \_ [rcuc/55]
543 ? S 0:00 \_ [migration/55]
544 ? S 0:00 \_ [ksoftirqd/55]
546 ? S< 0:00 \_ [kworker/55:0H]
547 ? S 0:00 \_ [watchdog/56]
548 ? S 0:00 \_ [rcuc/56]
549 ? S 0:00 \_ [migration/56]
550 ? S 0:00 \_ [ksoftirqd/56]
552 ? S< 0:00 \_ [kworker/56:0H]
554 ? S 0:00 \_ [watchdog/57]
555 ? S 0:00 \_ [rcuc/57]
556 ? S 0:00 \_ [migration/57]
557 ? S 0:00 \_ [ksoftirqd/57]
559 ? S< 0:00 \_ [kworker/57:0H]
560 ? S 0:00 \_ [watchdog/58]
561 ? S 0:00 \_ [rcuc/58]
562 ? S 0:00 \_ [migration/58]
563 ? S 0:00 \_ [ksoftirqd/58]
565 ? S< 0:00 \_ [kworker/58:0H]
566 ? S 0:00 \_ [watchdog/59]
567 ? S 0:00 \_ [rcuc/59]
568 ? S 0:00 \_ [migration/59]
569 ? S 0:00 \_ [ksoftirqd/59]
570 ? S 0:00 \_ [kworker/59:0]
571 ? S< 0:00 \_ [kworker/59:0H]
572 ? S 0:00 \_ [watchdog/60]
573 ? S 0:00 \_ [rcuc/60]
574 ? S 0:00 \_ [migration/60]
575 ? S 0:00 \_ [ksoftirqd/60]
577 ? S< 0:00 \_ [kworker/60:0H]
578 ? S 0:00 \_ [watchdog/61]
579 ? S 0:00 \_ [rcuc/61]
580 ? S 0:00 \_ [migration/61]
581 ? S 0:00 \_ [ksoftirqd/61]
583 ? S< 0:00 \_ [kworker/61:0H]
584 ? S 0:00 \_ [watchdog/62]
585 ? S 0:00 \_ [rcuc/62]
586 ? S 0:00 \_ [migration/62]
587 ? S 0:00 \_ [ksoftirqd/62]
589 ? S< 0:00 \_ [kworker/62:0H]
590 ? S 0:00 \_ [watchdog/63]
591 ? S 0:00 \_ [rcuc/63]
592 ? S 0:00 \_ [migration/63]
593 ? S 0:00 \_ [ksoftirqd/63]
594 ? S 0:00 \_ [kworker/63:0]
595 ? S< 0:00 \_ [kworker/63:0H]
596 ? S< 0:00 \_ [khelper]
597 ? S 0:00 \_ [kdevtmpfs]
598 ? S 0:00 \_ [kworker/22:1]
600 ? S 0:12 \_ [kworker/20:1]
601 ? S 0:00 \_ [kworker/19:1]
603 ? S 0:00 \_ [kworker/17:1]
604 ? S 0:02 \_ [kworker/16:1]
605 ? S 0:10 \_ [kworker/15:1]
606 ? S 0:10 \_ [kworker/14:1]
607 ? S 0:10 \_ [kworker/13:1]
608 ? S 0:00 \_ [kworker/12:1]
609 ? S 0:10 \_ [kworker/11:1]
611 ? S< 0:00 \_ [netns]
612 ? S< 0:00 \_ [perf]
615 ? S 0:00 \_ [kworker/7:1]
616 ? S 0:00 \_ [kworker/6:1]
617 ? S 0:04 \_ [kworker/5:1]
618 ? S 0:03 \_ [kworker/4:1]
620 ? S< 0:00 \_ [writeback]
621 ? S< 0:00 \_ [kintegrityd]
622 ? S< 0:00 \_ [bioset]
623 ? S< 0:00 \_ [crypto]
624 ? S 0:09 \_ [kworker/2:1]
625 ? S< 0:00 \_ [kblockd]
626 ? S 0:00 \_ [kworker/1:1]
627 ? S< 0:00 \_ [ata_sff]
628 ? S 0:00 \_ [khubd]
629 ? S< 0:00 \_ [md]
630 ? S 0:00 \_ [kworker/0:1]
631 ? S 0:00 \_ [kworker/30:1]
632 ? S 0:00 \_ [khungtaskd]
633 ? S 0:00 \_ [kswapd0]
634 ? S 0:00 \_ [kswapd1]
635 ? S 0:00 \_ [kswapd2]
636 ? S 0:00 \_ [kswapd3]
637 ? S 0:00 \_ [kswapd4]
638 ? S 0:00 \_ [kswapd5]
639 ? S 0:00 \_ [kswapd6]
640 ? S 0:00 \_ [kswapd7]
641 ? SN 0:00 \_ [ksmd]
642 ? SN 0:00 \_ [khugepaged]
643 ? S 0:00 \_ [fsnotify_mark]
650 ? S< 0:00 \_ [kthrotld]
651 ? S 0:00 \_ [kworker/27:1]
652 ? S 0:06 \_ [kworker/28:1]
653 ? S< 0:00 \_ [kpsmoused]
654 ? S 0:00 \_ [kworker/2:2]
655 ? S 0:00 \_ [print/0]
656 ? S 0:00 \_ [print/1]
657 ? S< 0:00 \_ [deferwq]
658 ? S 0:00 \_ [kworker/u128:1]
668 ? S 0:04 \_ [kworker/35:1]
672 ? S 0:00 \_ [kworker/24:1]
679 ? S 0:07 \_ [kworker/38:1]
698 ? S< 0:00 \_ [kmpath_rdacd]
700 ? S< 0:00 \_ [kmpath_aluad]
711 ? S 0:00 \_ [kworker/43:1]
721 ? S 0:01 \_ [kworker/44:1]
724 ? S 0:00 \_ [kworker/29:1]
726 ? S 0:00 \_ [kworker/46:1]
729 ? S 0:00 \_ [kworker/53:1]
730 ? S 0:08 \_ [kworker/52:1]
734 ? S 0:00 \_ [kworker/48:1]
735 ? S 0:00 \_ [kworker/49:1]
737 ? S 0:06 \_ [kworker/25:1]
739 ? S 0:00 \_ [kworker/45:1]
740 ? S 0:00 \_ [kworker/58:1]
751 ? S 0:00 \_ [kworker/41:1]
756 ? S 0:00 \_ [kworker/61:1]
757 ? S 0:00 \_ [kworker/62:1]
770 ? S 0:00 \_ [scsi_eh_0]
771 ? S< 0:00 \_ [scsi_tmf_0]
772 ? S< 0:00 \_ [ttm_swap]
775 ? S 0:09 \_ [kworker/60:1]
777 ? S 0:05 \_ [kworker/31:1]
779 ? S 0:00 \_ [kworker/23:1]
780 ? S 0:00 \_ [kworker/56:1]
781 ? S 0:00 \_ [kworker/57:1]
783 ? S 0:00 \_ [kworker/47:1]
785 ? S 0:07 \_ [kworker/34:1]
786 ? S 0:00 \_ [kworker/50:1]
788 ? S 0:00 \_ [kworker/55:1]
797 ? S 0:11 \_ [kworker/0:2]
799 ? S< 0:00 \_ [kworker/42:1H]
801 ? S< 0:00 \_ [kworker/48:1H]
802 ? S< 0:00 \_ [kworker/28:1H]
803 ? S< 0:00 \_ [kworker/25:1H]
804 ? S< 0:00 \_ [kworker/31:1H]
805 ? S< 0:00 \_ [kworker/30:1H]
845 ? S< 0:00 \_ [kworker/51:1H]
847 ? S 0:00 \_ [kworker/31:2]
853 ? S< 0:00 \_ [kworker/36:1H]
855 ? S 0:07 \_ [kworker/36:2]
872 ? S< 0:00 \_ [kworker/35:1H]
875 ? S< 0:00 \_ [kworker/24:1H]
876 ? S 0:00 \_ [jbd2/sda2-8]
877 ? S< 0:00 \_ [ext4-rsv-conver]
884 ? S 0:07 \_ [kworker/51:2]
885 ? S 0:00 \_ [kworker/u135:1]
892 ? S< 0:00 \_ [kworker/26:1H]
896 ? S< 0:00 \_ [kworker/11:1H]
899 ? S< 0:00 \_ [kworker/12:1H]
900 ? S< 0:00 \_ [kworker/60:1H]
903 ? S 0:00 \_ [kworker/u133:1]
915 ? S< 0:00 \_ [kworker/59:1H]
916 ? S< 0:00 \_ [kworker/52:1H]
917 ? S 0:00 \_ [kworker/52:2]
918 ? S< 0:00 \_ [kworker/38:1H]
919 ? S< 0:00 \_ [kworker/39:1H]
927 ? S< 0:00 \_ [kworker/14:1H]
929 ? S 0:00 \_ [kauditd]
931 ? S 0:00 \_ [kworker/11:2]
935 ? S< 0:00 \_ [kworker/32:1H]
938 ? S 0:00 \_ [kworker/28:2]
941 ? S 0:00 \_ [kworker/13:2]
944 ? S 0:00 \_ [kworker/u130:1]
946 ? S< 0:00 \_ [kworker/8:1H]
948 ? S 0:00 \_ [kworker/14:2]
955 ? S< 0:00 \_ [kworker/10:1H]
958 ? S< 0:00 \_ [kworker/34:1H]
960 ? S 0:00 \_ [kworker/60:2]
963 ? S< 0:00 \_ [kworker/43:1H]
989 ? S< 0:00 \_ [kworker/37:1H]
1015 ? S< 0:00 \_ [kworker/18:1H]
1019 ? S 0:07 \_ [kworker/30:2]
1047 ? S< 0:00 \_ [kworker/19:1H]
1068 ? SN 0:00 \_ [kipmi0]
1104 ? S< 0:00 \_ [kworker/27:1H]
1106 ? S< 0:00 \_ [edac-poller]
1143 ? S< 0:00 \_ [kworker/15:1H]
1152 ? S< 0:00 \_ [kworker/13:1H]
1153 ? S< 0:00 \_ [kvm-irqfd-clean]
1154 ? S< 0:00 \_ [kworker/29:1H]
1193 ? S< 0:00 \_ [kworker/0:1H]
1280 ? S 0:00 \_ [kworker/15:2]
1298 ? S< 0:00 \_ [kworker/9:1H]
1300 ? S 0:00 \_ [kworker/35:2]
1310 ? S 0:00 \_ [jbd2/sda3-8]
1311 ? S< 0:00 \_ [ext4-rsv-conver]
1313 ? S 0:00 \_ [jbd2/sda7-8]
1314 ? S< 0:00 \_ [ext4-rsv-conver]
1315 ? S< 0:00 \_ [kworker/40:1H]
1316 ? S< 0:00 \_ [kworker/44:1H]
1318 ? S 0:00 \_ [jbd2/sda5-8]
1319 ? S< 0:00 \_ [ext4-rsv-conver]
1329 ? S 0:00 \_ [jbd2/sda6-8]
1330 ? S< 0:00 \_ [ext4-rsv-conver]
1332 ? S< 0:00 \_ [kworker/46:1H]
1333 ? S 0:12 \_ [kworker/46:2]
1340 ? S< 0:00 \_ [kworker/4:1H]
1348 ? S 0:00 \_ [kworker/38:2]
1549 ? S 0:00 \_ [kworker/20:2]
1556 ? S< 0:00 \_ [kworker/20:1H]
1558 ? S< 0:00 \_ [kworker/16:1H]
1565 ? S< 0:00 \_ [kworker/22:1H]
1917 ? S< 0:00 \_ [kworker/62:1H]
4007 ? S< 0:00 \_ [rpciod]
4011 ? S< 0:00 \_ [nfsiod]
4030 ? S 0:00 \_ [nfsv4.0-svc]
4035 ? S 0:01 \_ [kworker/u129:2]
4038 ? S 0:00 \_ [kworker/34:2]
4287 ? S< 0:00 \_ [kworker/54:1H]
4296 ? S< 0:00 \_ [kworker/56:1H]
4339 ? S 0:00 \_ [kworker/25:2]
4340 ? S< 0:00 \_ [kworker/2:1H]
4448 ? S 0:00 \_ [kworker/u136:1]
4449 ? S 0:00 \_ [kworker/u136:2]
4454 ? S 0:01 \_ [kworker/u132:3]
4455 ? S 0:07 \_ [kworker/61:2]
4456 ? S 0:10 \_ [kworker/9:2]
4458 ? S 0:07 \_ [kworker/53:2]
4460 ? S 0:12 \_ [kworker/23:2]
4461 ? S 0:07 \_ [kworker/63:2]
4462 ? S 0:08 \_ [kworker/56:2]
4463 ? S 0:12 \_ [kworker/43:2]
4464 ? S 0:12 \_ [kworker/19:2]
4465 ? S 0:10 \_ [kworker/8:2]
4466 ? S 0:07 \_ [kworker/59:2]
4467 ? S 0:12 \_ [kworker/47:2]
4468 ? S 0:07 \_ [kworker/24:2]
4469 ? S 0:08 \_ [kworker/54:2]
4470 ? S 0:12 \_ [kworker/40:2]
4471 ? S 0:12 \_ [kworker/22:2]
4472 ? S 0:12 \_ [kworker/45:2]
4473 ? S 0:08 \_ [kworker/50:2]
4474 ? S 0:12 \_ [kworker/17:2]
4475 ? S 0:04 \_ [kworker/5:2]
4476 ? S 0:08 \_ [kworker/62:2]
4477 ? S 0:10 \_ [kworker/12:2]
4478 ? S 0:06 \_ [kworker/7:2]
4479 ? S 0:12 \_ [kworker/41:2]
4480 ? S 0:08 \_ [kworker/48:2]
4481 ? S 0:09 \_ [kworker/1:2]
4482 ? S 0:06 \_ [kworker/4:2]
4483 ? S 0:07 \_ [kworker/26:2]
4484 ? S 0:09 \_ [kworker/10:2]
4485 ? S 0:12 \_ [kworker/42:2]
4486 ? S 0:06 \_ [kworker/27:2]
4487 ? S 0:12 \_ [kworker/21:2]
4488 ? S 0:08 \_ [kworker/6:2]
4489 ? S 0:04 \_ [kworker/37:2]
4490 ? S 0:07 \_ [kworker/57:2]
4491 ? S 0:07 \_ [kworker/55:2]
4492 ? S 0:11 \_ [kworker/18:2]
4493 ? S 0:08 \_ [kworker/3:2]
4494 ? S 0:06 \_ [kworker/32:2]
4497 ? S 0:07 \_ [kworker/58:2]
4498 ? S 0:04 \_ [kworker/39:2]
4499 ? S 0:06 \_ [kworker/49:2]
4500 ? S 0:11 \_ [kworker/44:2]
4501 ? S 0:03 \_ [kworker/33:2]
4502 ? S 0:04 \_ [kworker/29:2]
4541 ? S 0:09 \_ [kworker/16:2]
4542 ? S 0:00 \_ [kworker/u131:1]
4544 ? S 0:00 \_ [kworker/u131:3]
1 ? Ss 0:03 /sbin/init showopts
920 ? Ss 0:00 /usr/lib/systemd/systemd-journald
961 ? Ss 0:02 /usr/lib/systemd/systemd-udevd
1518 ? Ss 0:00 /bin/dbus-daemon --system --address=systemd:
--nofork --nopidfile --systemd-activation
1521 ? Ss 0:00 avahi-daemon: running [exec-node01.local]
1522 ? Ssl 0:00 /usr/sbin/nscd --foreground
1523 ? Ss 0:00 /usr/sbin/wpa_supplicant -c
/etc/wpa_supplicant/wpa_supplicant.conf -u -f /var/log/wpa_supplicant.log
1525 ? Ssl 0:00 /usr/sbin/ModemManager
1526 ? Ss 0:00 /sbin/rpcbind -w -f
1528 ? Ss 0:00 /usr/lib/systemd/systemd-logind
1547 ? Ssl 0:00 /usr/lib/polkit-1/polkitd --no-debug
1550 ? Ssl 0:00 /usr/sbin/rsyslogd -n
1566 ? Ss 0:00 /usr/sbin/sssd -D -f
1576 ? S 0:00 \_ /usr/lib/sssd/sssd_be --domain default
--debug-to-files
1601 ? S 0:00 \_ /usr/lib/sssd/sssd_nss --debug-to-files
1602 ? S 0:00 \_ /usr/lib/sssd/sssd_pam --debug-to-files
2384 ? S 0:00 /sbin/dhclient6 -6 -cf
/var/lib/dhcp6/dhclient6.em1.conf -lf /var/lib/dhcp6/dhclient6.em1.lease -pf
/var/run/dhclient6.em1.pid -q em1
2949 ? S 0:00 avahi-autoipd: [em1] sleeping
2950 ? S 0:00 \_ avahi-autoipd: [em1] callout dispatcher
3012 ? Ss 0:00 /sbin/dhcpcd --netconfig -L -E -HHH -c
/etc/sysconfig/network/scripts/dhcpcd-hook -t 0 -h exec-node01 em1
4001 ? Ss 0:00 /usr/sbin/sshd -D
4266 ? Ss 0:00 \_ sshd: root@pts/0
4270 pts/0 Ss 0:00 \_ -bash
4565 pts/0 R+ 0:00 \_ ps -e f
4063 ? Ss 0:00 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -g -u
ntp:ntp -c /etc/ntp.conf
4099 tty1 Ss+ 0:00 /sbin/agetty --noclear tty1 linux
4224 ? Ss 0:00 /usr/lib/postfix/master
4240 ? S 0:00 \_ pickup -l -t fifo -u
4241 ? S 0:00 \_ qmgr -l -t fifo -u
4239 ? Ss 0:00 /usr/sbin/cron -n
4243 ? Dl 0:05 /opt/sge/bin/lx-amd64/sge_execd
4341 ? S 0:00 \_ sge_shepherd-281 -bg
4342 ? Ss 0:00 \_ -bash
/opt/sge/default/spool/exec-node01/job_scripts/281
4379 ? Dl 0:37 \_ mpiexec mpihello
4381 ? Sl 0:00 \_ /opt/sge/bin/lx-amd64/qrsh -inherit
-nostdin -V exec-node02
PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/bin:$PATH ; export PATH ;
LD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$LD_LIBRARY_PATH
; export LD_LIBRARY_PATH ;
DYLD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$DYLD_LIBRARY_PATH
; export DYLD_LIBRARY_PATH ;
/usr/local/misc/openmpi/openmpi-1.10.3/build/bin/orted --hnp-topo-sig
8N:4S:8L3:32L2:64L1:64C:64H:x86_64 -mca ess "env" -mca orte_ess_jobid
"878379008" -mca orte_ess_vpid 1 -mca orte_ess_num_procs "8" -mca orte_hnp_uri
"878379008.0;tcp://192.168.117.1:54706" -mca plm "rsh" --tree-spawn
4382 ? Sl 0:00 \_ /opt/sge/bin/lx-amd64/qrsh -inherit
-nostdin -V exec-node05
PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/bin:$PATH ; export PATH ;
LD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$LD_LIBRARY_PATH
; export LD_LIBRARY_PATH ;
DYLD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$DYLD_LIBRARY_PATH
; export DYLD_LIBRARY_PATH ;
/usr/local/misc/openmpi/openmpi-1.10.3/build/bin/orted --hnp-topo-sig
8N:4S:8L3:32L2:64L1:64C:64H:x86_64 -mca ess "env" -mca orte_ess_jobid
"878379008" -mca orte_ess_vpid 2 -mca orte_ess_num_procs "8" -mca orte_hnp_uri
"878379008.0;tcp://192.168.117.1:54706" -mca plm "rsh" --tree-spawn
4383 ? Sl 0:00 \_ /opt/sge/bin/lx-amd64/qrsh -inherit
-nostdin -V exec-node03
PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/bin:$PATH ; export PATH ;
LD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$LD_LIBRARY_PATH
; export LD_LIBRARY_PATH ;
DYLD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$DYLD_LIBRARY_PATH
; export DYLD_LIBRARY_PATH ;
/usr/local/misc/openmpi/openmpi-1.10.3/build/bin/orted --hnp-topo-sig
8N:4S:8L3:32L2:64L1:64C:64H:x86_64 -mca ess "env" -mca orte_ess_jobid
"878379008" -mca orte_ess_vpid 3 -mca orte_ess_num_procs "8" -mca orte_hnp_uri
"878379008.0;tcp://192.168.117.1:54706" -mca plm "rsh" --tree-spawn
4384 ? Sl 0:00 \_ /opt/sge/bin/lx-amd64/qrsh -inherit
-nostdin -V exec-node04
PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/bin:$PATH ; export PATH ;
LD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$LD_LIBRARY_PATH
; export LD_LIBRARY_PATH ;
DYLD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$DYLD_LIBRARY_PATH
; export DYLD_LIBRARY_PATH ;
/usr/local/misc/openmpi/openmpi-1.10.3/build/bin/orted --hnp-topo-sig
8N:4S:8L3:32L2:64L1:64C:64H:x86_64 -mca ess "env" -mca orte_ess_jobid
"878379008" -mca orte_ess_vpid 4 -mca orte_ess_num_procs "8" -mca orte_hnp_uri
"878379008.0;tcp://192.168.117.1:54706" -mca plm "rsh" --tree-spawn
4385 ? Sl 0:00 \_ /opt/sge/bin/lx-amd64/qrsh -inherit
-nostdin -V exec-node06
PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/bin:$PATH ; export PATH ;
LD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$LD_LIBRARY_PATH
; export LD_LIBRARY_PATH ;
DYLD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$DYLD_LIBRARY_PATH
; export DYLD_LIBRARY_PATH ;
/usr/local/misc/openmpi/openmpi-1.10.3/build/bin/orted --hnp-topo-sig
8N:4S:8L3:32L2:64L1:64C:64H:x86_64 -mca ess "env" -mca orte_ess_jobid
"878379008" -mca orte_ess_vpid 5 -mca orte_ess_num_procs "8" -mca orte_hnp_uri
"878379008.0;tcp://192.168.117.1:54706" -mca plm "rsh" --tree-spawn
4386 ? Sl 0:00 \_ /opt/sge/bin/lx-amd64/qrsh -inherit
-nostdin -V exec-node08
PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/bin:$PATH ; export PATH ;
LD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$LD_LIBRARY_PATH
; export LD_LIBRARY_PATH ;
DYLD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$DYLD_LIBRARY_PATH
; export DYLD_LIBRARY_PATH ;
/usr/local/misc/openmpi/openmpi-1.10.3/build/bin/orted --hnp-topo-sig
8N:4S:8L3:32L2:64L1:64C:64H:x86_64 -mca ess "env" -mca orte_ess_jobid
"878379008" -mca orte_ess_vpid 6 -mca orte_ess_num_procs "8" -mca orte_hnp_uri
"878379008.0;tcp://192.168.117.1:54706" -mca plm "rsh" --tree-spawn
4387 ? Sl 0:00 \_ /opt/sge/bin/lx-amd64/qrsh -inherit
-nostdin -V exec-node07
PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/bin:$PATH ; export PATH ;
LD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$LD_LIBRARY_PATH
; export LD_LIBRARY_PATH ;
DYLD_LIBRARY_PATH=/usr/local/misc/openmpi/openmpi-1.10.3/build/lib64:$DYLD_LIBRARY_PATH
; export DYLD_LIBRARY_PATH ;
/usr/local/misc/openmpi/openmpi-1.10.3/build/bin/orted --hnp-topo-sig
8N:4S:8L3:32L2:64L1:64C:64H:x86_64 -mca ess "env" -mca orte_ess_jobid
"878379008" -mca orte_ess_vpid 7 -mca orte_ess_num_procs "8" -mca orte_hnp_uri
"878379008.0;tcp://192.168.117.1:54706" -mca plm "rsh" --tree-spawn
4405 ? S 0:16 \_ mpihello
4406 ? S 0:16 \_ mpihello
4407 ? S 0:16 \_ mpihello
4408 ? S 0:15 \_ mpihello
4409 ? S 0:16 \_ mpihello
4410 ? S 0:16 \_ mpihello
4411 ? S 0:16 \_ mpihello
4412 ? S 0:14 \_ mpihello
4413 ? S 0:16 \_ mpihello
4414 ? S 0:16 \_ mpihello
4415 ? S 0:16 \_ mpihello
4416 ? S 0:14 \_ mpihello
4417 ? S 0:15 \_ mpihello
4418 ? S 0:14 \_ mpihello
4419 ? S 0:13 \_ mpihello
4420 ? S 0:15 \_ mpihello
4421 ? S 0:16 \_ mpihello
4422 ? S 0:14 \_ mpihello
4423 ? S 0:16 \_ mpihello
4424 ? S 0:14 \_ mpihello
4425 ? S 0:16 \_ mpihello
4426 ? S 0:14 \_ mpihello
4427 ? S 0:16 \_ mpihello
4428 ? S 0:14 \_ mpihello
4429 ? S 0:15 \_ mpihello
4430 ? S 0:16 \_ mpihello
4431 ? S 0:13 \_ mpihello
4432 ? S 0:14 \_ mpihello
4433 ? S 0:16 \_ mpihello
4434 ? S 0:16 \_ mpihello
4435 ? S 0:13 \_ mpihello
4436 ? S 0:14 \_ mpihello
4437 ? S 0:16 \_ mpihello
4438 ? S 0:14 \_ mpihello
4439 ? S 0:16 \_ mpihello
4440 ? S 0:15 \_ mpihello
4441 ? S 0:15 \_ mpihello
4442 ? S 0:16 \_ mpihello
4443 ? S 0:13 \_ mpihello
4444 ? S 0:15 \_ mpihello
4445 ? S 0:16 \_ mpihello
4446 ? S 0:16 \_ mpihello
4447 ? S 0:16 \_ mpihello
4268 ? Ss 0:00 /usr/lib/systemd/systemd --user
4269 ? S 0:00 \_ (sd-pam)
job-ID  prior    name   user    state  submit/start at      queue              master  ja-task-ID
------------------------------------------------------------------------------------------------------------------
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node01 MASTER
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
all.q@exec-node01 SLAVE
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
all.q@exec-node02 SLAVE
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
all.q@exec-node03 SLAVE
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
all.q@exec-node04 SLAVE
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
all.q@exec-node05 SLAVE
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
all.q@exec-node06 SLAVE
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
all.q@exec-node07 SLAVE
281 0.55500 STDIN ulrich r 08/15/2016 18:13:19 all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
all.q@exec-node08 SLAVE
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users