Re: [Wien] Can someone share the MPI parallel script using on LSF job management system with me ?

2019-06-03 Thread Gavin Abo

Yes, for both k-point parallel and mpi parallel you need "-p".

Do not use "mpirun run_lapw ." in your job script.?0?2 Use "run_lapw -p 
." which will itself run the "mpirun" or another mpi launcher that 
you have set in siteconfig.
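
A minimal LSF job script following this advice might look like the sketch below. The queue name, slot count, and output file are assumptions for illustration; adapt them to your cluster:

```shell
#!/bin/sh
#BSUB -J wien2k-mpi        # job name
#BSUB -q normal            # queue name (assumption; use your cluster's queue)
#BSUB -n 28                # request 28 slots
#BSUB -o wien2k.%J.out     # output file

cd "$LS_SUBCWD"            # LSF sets this to the directory of submission

# ... write the .machines file from $LSB_HOSTS here ...

run_lapw -p                # -p: parallel mode; run_lapw itself invokes mpirun
```

Note that "run_lapw -p" replaces the "mpirun run_lapw" call entirely; mpirun is only called internally, per the parallel_options set in siteconfig.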


For mpi parallel [ 
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg00985.html 
, http://www.wien2k.at/reg_user/faq/ecss_hliu_051012.pdf (slide 7)], you 
need to change your job script so that it writes a .machines file like, 
for example:


granularity:1
lapw0:c021:28
1:c021:28
extrafine:1
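
A sketch of how such an MPI-style .machines file could be generated from LSF's $LSB_HOSTS, which lists one hostname per allocated slot. The fallback host list "c021 c021 c021 c021" is a hypothetical value for a dry run outside LSF; inside a job the scheduler sets the variable:

```shell
#!/bin/sh
# Hypothetical fallback so the script can be tested outside an LSF job;
# under LSF, $LSB_HOSTS is set by the scheduler (one entry per slot).
LSB_HOSTS=${LSB_HOSTS:-"c021 c021 c021 c021"}

first=$(echo "$LSB_HOSTS" | awk '{print $1}')                  # first host
nfirst=$(echo "$LSB_HOSTS" | tr ' ' '\n' | grep -cx "$first")  # its slot count

echo 'granularity:1'        >  .machines
echo "lapw0:$first:$nfirst" >> .machines
# one "1:host:ncores" line per distinct host -> one MPI job spanning its cores
echo "$LSB_HOSTS" | tr ' ' '\n' | sort | uniq -c | \
  awk '{print "1:" $2 ":" $1}' >> .machines
echo 'extrafine:1'          >> .machines

cat .machines
```

With the fallback list this writes "granularity:1", "lapw0:c021:4", "1:c021:4", "extrafine:1"; in a real 28-slot job on c021 the counts would be 28, matching the example above.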

You might try the LSF job script, or base your own script on the 
one, in the post at:


https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg01612.html


On 6/3/2019 2:45 AM, Min Lin wrote:

Dear wien2k experts,

I am using version 18.2 of wien2k on our group's computer cluster 
(LSF job management). The installation of wien2k is OK (including 
the fine-grained parallelization), since there are no error messages in 
compile.msg. I can now run k-point parallel calculations, but I lack 
knowledge of shell scripting, so I am asking a good-hearted person 
for the script.


The "on the fly" .machines file is generated by the following in the 
script:


#make .machines file
echo 'granularity:1' > .machines
echo "lapw0:"`echo $LSB_HOSTS | cut -d" " -f1` >> .machines
for i in `echo $LSB_HOSTS`
do
  echo "1:"$i >> .machines
done
echo 'extrafine:1' >> .machines


As an example, if I use one node (28 cores), the .machines file is 
as follows:



granularity:1
lapw0:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
1:c021
extrafine:1


The k-point parallel run is OK.


Now I need to use MPI parallelization, but I cannot find a corresponding 
script for the LSF job management system. After reading chapter 5.5 
of the UG several times and the FAQ page 
(http://susi.theochem.tuwien.ac.at/reg_user/faq/pbs.html), I was 
still really confused. I don't know what the content of the 
.machines file should be, not to mention the script. Unfortunately, I got 
no help from the cluster engineer.



Does the command "mpirun run_lapw ." ?0?2launch the job? 
?0?2Should?0?2the?0?2"-p" option be used ?



I believe the scripts do not differ much on the same job management 
system. I can't find a script for the LSF job management system in 
the mailing list, and I know that Prof. Peter Blaha is tired of similar 
questions.



Here, I just hope someone can share an MPI parallel script for the 
LSF job management system with me; I would highly appreciate it.



Looking forward to your reply.


regards,

Min Lin,
PhD student, Department of Chemistry, Xiamen University, China.
E-mail address: 2236673...@qq.com
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html

