To avoid using OpenMP, set omp_global:1 in .machines. That is according to:
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg18967.html
Regarding the .machines file, you may want to check the mailing list archive
to learn about the .machines syntax difference between k-point and MPI
parallelization.
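For orientation, a minimal .machines sketch with OpenMP disabled (hostnames and core counts are placeholders, not from the thread):
# .machines sketch; lines starting with # are comments
omp_global:1
lapw0:node1:4
1:node1:4
granularity:1
extrafine:1
Here omp_global:1 pins every program to a single OpenMP thread, while the lapw0: and 1: lines still run 4-process MPI jobs.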
Dear Members,
Version 23.2
I want to do a calculation with only 1 k-point (an isolated system) using MPI
parallelization (not OpenMP).
This is my first time doing an MPI parallelization, and I am still quite
confused after going through the user's guide and the slides.
lscpu gives me the information.
As Peter said, try running on just 4 cores. There are some things that can
go wrong if you use many MPI processes for small problems.
Beyond that, please execute "ulimit -a" at a terminal. It is also good to
run it remotely in a job. I want to find out whether you have no rights to
set limits but they are
Try to run the lapw0_mpi on 4 cores only.
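As a sketch, the limit checks suggested above, run both in a terminal and inside a batch job:
ulimit -a                # list all current limits
ulimit -s                # stack size; WIEN2k generally wants "unlimited"
ulimit -s unlimited      # an error here means the hard limit is capped by the admins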
On 3/23/22 at 11:48, venky ch wrote:
Dear Prof. Marks and Prof. Blaha,
Thanks for your quick responses. The answers are as follows,
a) Is this a supercomputer, a lab cluster or your cluster?
Ans: It is a supercomputer
b) Did you set it up or did someone else?
Ans: I haven't set up these ulimits.
c) Do you have root/su rights?
What case is it that you run on 32 cores? How many atoms?
Remember: more cores does not always mean faster; in fact it could also mean
a crash or MUCH slower execution.
Please read the parallelization section of the UG.
On 23.03.2022 at 09:31, venky ch wrote:
There are many things wrong, but let's start with the critical one --
ulimit.
a) Is this a supercomputer, a lab cluster or your cluster?
b) Did you set it up or did someone else?
c) Do you have root/su rights?
Someone has set limits in such a way that it is interfering with the
calculations. It
Dear Wien2k users,
I have successfully installed the wien2k.21 version on the HPC cluster.
However, while running a test calculation, I get the following error and
lapw0_mpi crashes.
/home/proj/21/phyvech/.bashrc: line 43: ulimit: stack size: cannot modify
limit: Operation not permitted
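A hedged workaround, assuming line 43 of that .bashrc is a bare "ulimit -s unlimited": silence the failure so it does not break non-interactive shells, and ask the admins to raise the hard limit properly:
# in ~/.bashrc:
ulimit -s unlimited 2> /dev/null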
Definitely, it has nothing to do with MPI_REMOTE.
Apparently, your ssh setup does not transfer the "environment".
There are 2 solutions:
If you want to run k-parallel only on one node, simply put
USE_REMOTE=0 in parallel_options. However, with this you are not able
to use more nodes in one job.
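A sketch of that single-node setting (parallel_options lives in $WIENROOT and uses csh syntax):
setenv USE_REMOTE 0
setenv MPI_REMOTE 0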
As before, "command not found" is a PATH problem, nothing to do with WIEN2k.
Why do you want many versions? One version can run many jobs in parallel.
Professor Laurence Marks
"Research is to see what everybody else has seen, and to think what nobody
else has thought", Albert Szent-Györgyi
Dear all WIEN2k users,
> The recommended option for MPI version 2 (all modern MPIs) is to set
MPI_REMOTE to zero. The mpirun command will be issued on the original
node, but the lapw1_mpi executables will run as given in .machines.
> This should solve your problem.
Now mpi and k-point parallel
Dear all Wien2k users,
It is a pleasure to report that at last my problem is solved.
Here, I would like to express my gratitude to Peter Blaha, Laurence Marks,
Gavin Abo and Fecher Gerhard for all their very nice and valuable comments
and helpful links.
Sincerely yours,
Leila
On Sat, May 29,
Confucius say "teach a woman to fish..."
Please read the pages I sent, and search for other ones.
The difference between lapw0para and lapw1para is that
lapw0para always executes mpirun on the original node, while lapw1para may not.
The behavior of lapw1para depends on MPI_REMOTE (set in
WIEN2k_parallel_options in w2k21.1, or parallel_options in earlier versions).
With MPI_REMOTE=1 it will first issue an ssh to the remote node and start mpirun there.
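In config form, a sketch of the two modes:
# mpirun is issued on the original node (recommended for modern MPIs):
setenv MPI_REMOTE 0
# lapw1para first ssh-es to the remote node and starts mpirun there:
# setenv MPI_REMOTE 1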
As we have told you before, mpirun is a command on your system; it is not
part of WIEN2k. Your problem is because you have something wrong in what is
defined (probably) for your PATH variable, and/or how this is exported --
my guess.
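A quick test, run inside the same (batch) environment where the failure occurs:
which mpirun     # "not found" confirms a PATH problem, not a WIEN2k one
echo $PATH       # the MPI bin directory must be listed here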
I suggest you read
Dear all wien2k users,
Following the previous comment referring me to the admin, I contacted the
cluster admin. Following the admin's advice, I recompiled WIEN2k
successfully using the cluster modules.
>Once the blacs problem has been fixed,
For example, is the following correct?
As Laurence Marks mentioned, looking into all the files you proposed would
cost a couple of hours. You have to check these files yourself and solve
the problem, or at least extract the most important information.
A few remarks:
> You need to link with the blacs library for openmpi.
I
You MUST read your files yourself FIRST, and use some logic.
The files for lapw0/lapw1 you include indicate that you used ifort/icc for
the non-parallel parts. However, the parallel parts use mpif90, which on
your system points to gfortran. You need to use the parallel version of
ifort, which is mpiifort.
Dear all wien2k users,
Thank you for your reply and guides.
> You need to link with the blacs library for openmpi.
I unsuccessfully recompiled WIEN2k by linking with the blacs library for
openmpi as "mkl_blacs_openmpi_lp64" due to gfortran errors. The video of
this recompile is uploaded to a
Peter beat me to the response -- please do as he says and move stepwise
forward, posting single steps if they fail.
On Thu, May 6, 2021 at 10:38 AM Peter Blaha
wrote:
> Once the blacs problem has been fixed, the next step is to run lapw0 in
> sequential and parallel mode.
>
> Add:
>
> x lapw0
Once the blacs problem has been fixed, the next step is to run lapw0 in
sequential and parallel mode.
Add:
x lapw0
and check the case.output0 and case.scf0 files (copy them to a different
name) as well as the message from the queuing system.
Then add:
mpirun -np 4 $WIENROOT/lapw0_mpi
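Put together as a job-script fragment (a sketch; "case" stands for the session name, and lapw0.def is assumed to be present from the preceding run -- "x lapw0 -d" would generate just the def file):
x lapw0
cp case.output0 case.output0_seq      # keep the serial output for comparison
mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def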
One thing is clear:
lapw1_mpi cannot work.
You are linking with -lmkl_blacs_intelmpi_lp64,
but you are using openmpi.
You need to link with the blacs library for openmpi.
It is mentioned in the usersguide.
On 06.05.2021 at 15:09, leila mollabashi wrote:
Dear all wien2k users,
>I suggest that you focus on the PATH first, using
I followed your suggestion. The script and results are at
https://files.fm/u/m2qak574g. The compile.msg_lapw0 and compile.msg_lapw1
are at https://files.fm/u/pvdn52zpw .
Sincerely yours,
Leila
On Wed, May 5,
I think we (collectively) may be confusing things by offering too much
advice!
Let's keep it simple, and focus on one thing at a time. The "mpirun not
found" error has nothing to do with compilers. It is 100% due to your not
having the PATH variable set right. This is not fftw, but probably in the
Three additional comments:
1) If you are running the slurm.job script as Non-Interactive [1,2],
you might need a "source /etc/profile.d/ummodules.csh" line like that at
[3].
[1] https://slurm.schedmd.com/faq.html#sbatch_srun
[2]
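As a sketch, the corresponding lines near the top of slurm.job (the profile path is site-specific and taken from the reference above):
source /etc/profile.d/ummodules.csh
module load openmpi/4.1.0_gcc620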
For certain, "/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpiexec
/home/users/mollabashi/codes/v21.1/run_lapw -p" is completely wrong. You do
not, repeat do not, use mpirun or mpiexec to start run_lapw. It has to be
started simply as "run_lapw -p ..." by itself.
I suggest that you create
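That is, schematically:
# correct -- the driver script launches mpi itself, based on .machines:
run_lapw -p
# wrong -- never wrap the driver in an MPI launcher:
# mpiexec run_lapw -p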
DEEP THOUGHT in D. Adams; Hitchhikers Guide to the Galaxy:
"I think the problem, to be quite honest with you,
is that you have never actually known what the question is."

Dr. Gerhard H. Fecher
Institut of Physics
Johannes Gutenberg - University
55099 Mainz
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] MPI error
Thank you.
On Mon, May 3, 2021, 3:04 AM Laurence Marks wrote:
You have to solve the "mpirun not found". That is due to your path/nfs/module
-- we do not know.
> Assuming that you used gcc
Yes.
> For certain you cannot run lapw2 without first running lapw1.
Yes. You are right. When x lapw1 -p has not executed I have changed the
.machines file and
> ...es?
Yes, and I also checked compile.msg in SRC_lapw1.
Sincerely yours,
Leila
On Mon, May 3, 2021 at 12:42 AM Fecher, Gerhard wrote:
> I guess that module does not work with tcsh
>
> Ciao
> Gerhard
From: Wien [wien-boun...@zeus.theochem.tuwien.ac.at] on behalf of
Laurence Marks [laurence.ma...@gmail.com]
Sent: Sunday, 2 May 2021 21:32
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] MPI error
Inlined response and questions
On Sun, May 2, 2021 at 2:19 PM leila mollabashi
wrote:
> Dear Prof. Peter Blaha and WIEN2k users,
>
> Now I have loaded the openmpi/4.1.0 and compiled WIEN2k. The admin told me
> that I can use your script in http://www.wien2k.at/reg_user/faq/slurm.job
Dear Prof. Peter Blaha and WIEN2k users,
Now I have loaded the openmpi/4.1.0 and compiled WIEN2k. The admin told me
that I can use your script in http://www.wien2k.at/reg_user/faq/slurm.job
. I added these lines to it too:
module load openmpi/4.1.0_gcc620
module load ifort
module load mkl
but
Recompile with LI, since mpirun is supported (after loading the proper mpi).
PS: Ask them if -np and -machinefile are still possible to use. Otherwise
you cannot mix k-parallel and mpi parallel, and for sure, for smaller
cases it is a severe limitation to have only ONE mpi job with many
Dear Prof. Peter Blaha and WIEN2k users,
Thank you for your assistance.
Here is the admin reply:
- the mpirun/mpiexec command is supported after loading the proper module
(I suggest openmpi/4.1.0 with gcc 6.2.0 or icc)
- you have to describe the needed resources (I suggest: --nodes and
It cannot initialize an mpi job, because it is missing the interface
software.
You need to ask the computing center / system administrators how one
executes an mpi job on this computer.
It could be that "mpirun" is not supported on this machine. You may try
a wien2k installation with
Dear Prof. Peter Blaha and WIEN2k users,
Then by running x lapw1 -p:
starting parallel lapw1 at Tue Apr 13 21:04:15 CEST 2021
-> starting parallel LAPW1 jobs at Tue Apr 13 21:04:15 CEST 2021
running LAPW1 in parallel mode (using .machines)
2 number_of_parallel_jobs
[1] 14530
[e0467:14538]
Dear Prof. Peter Blaha and WIEN2k users,
Thank you for your assistance.
> At least now the error: "lapw0 not found" is gone. Do you understand why??
Yes, I think it is because now the path is clearly known.
> How many slots do you get by this srun command?
Usually I went to a node with 28
On 12.04.2021 at 20:00, leila mollabashi wrote:
Dear Prof. Peter Blaha and WIEN2k users,
Thank you. Now my .machines file is:
lapw0:e0591:4
1:e0591:4
1:e0591:4
granularity:1
extrafine:1
I have installed WIEN2k under my user account on the cluster. When I use the
script "srun --pty /bin/bash" then it goes to one node of the cluster; the "ls -als
Your script is still wrong.
The .machines file should show:
lapw0:e0150:4
not
lapw0:e0150
:4
Therefore it tries to execute lapw0 instead of lapw0_mpi.
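The stray line break comes from the echo that builds the file (quoted in the message below). A hedged fix, assuming a bash script where $nproc holds the core count:
# print "lapw0:<host>:<n>" as ONE line, with no space before the colon:
echo "lapw0:$(hostname):$nproc" >> .machines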
---
Anyway, the first thing is to make the sequential WIEN2k run. You
claimed the WIENROOT is known in the batch job.
Please do:
Dear Prof. Peter Blaha,
Thank you for your guides. You are right. I edited the script and added
"source ~/.bashrc, echo 'lapw0:'`hostname`' :'$nproc >> .machines" to it.
The created .machines file is as follows:
lapw0:e0150
:4
1:e0150:4
1:e0150:4
granularity:1
extrafine:1
The slurm.out
When using the srun setup of WIEN2k it means that you are tightly
integrated into your system and have to follow all your system's default
settings.
For instance you configured CORES_PER_NODE=1; but I very much doubt
that your cluster has only one core per node, and srun will probably make
A guess: your srun is set up to use openmpi or something else, not Intel
impi, which is what you compiled for. Check what you have loaded, e.g. use
"which mpirun".
N.B. testing using lapw0 is simpler.
On Tue, Nov 26, 2019 at 12:07 PM Hanning Chen wrote:
Dear WIEN2K community,
I am a new user of WIEN2K, and just compiled it using the following options:
current:FOPT:-O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
-assume buffered_io -I$(MKLROOT)/include
current:FPOPT:-O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML
Did you try to set MPI_REMOTE to 0 in parallel_options???
Furthermore your machines file is not ok for lapw1: there is a "speed:"
missing at the beginning. It should read:
1:n05-32:10
1:n05-38:10
Actually, with this you are still NOT running lapw1 in mpi-mode on
multiple nodes.
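A sketch of the two variants, assuming the UG syntax for multi-node MPI jobs:
# two independent lapw1_mpi jobs, 10 processes each (note the leading "1:" speed):
1:n05-32:10
1:n05-38:10
# ONE lapw1_mpi job spanning both nodes with 20 processes:
1:n05-32:10 n05-38:10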
Dear WIEN2k users,
I have the following problem. I am trying to do parallel computing on a
cluster. Whenever I run a job on one node, both the MPI and k-point
parallelization work fine. However, when I try to go to several nodes,
the job does not do anything. The script just gets
Hi,
Thank you in advance.
I will check your solution as soon as I can, but I am pretty sure it will be
fine.
It seems the new mkl does not install the cluster libs under a non-commercial
license.
For now, I've got no errors during the compilation by using the static
libscalapack.a from
The attached hmsec.F for lapwso contains the old and new Scalapack routines.
Add -Dold_scalapack to the parallel compiler options.
Please note: There are cases, where the old Scalapack diagonalization fails.
On 12/23/2016 03:47 PM, cesar wrote:
To: wien@zeus.theochem.tuwien.ac.at
Subject: [Wien] mpi compilation problem
Hi,
I'm having a problem to get WIEN2k_16 installed.
I can compile wien2k_14.2 perfectly but wien2k_16 is impossible (LIBXC
and ELPA will not be included for now).
The problem seems related with mpi versions for lapw1 and lapwso :
seclr4.o: In function `seclr4_':
OK. If possible try and do a robust patch, i.e. one which is portable.
If you get one, please send it.
For reference, that call is important in most cases for larger
problems. It is related to the stacksize set for the user (ulimit). If
this is too small, large Fortran (and other) programs can crash.
Hi,
Thank you for your quick reply. I am going to investigate the
parallel_options together with the admins of our cluster.
As for your questions:
a) I am able to correctly generate the .machines file, so at least I
know the nodes on which the calculation takes place.
b) I will experiment
This is not so easy, and also this is probably not the only issue you
have. A few key points:
1) The default mechanism to connect is ssh, as this is the most
common. It is set up when you run configure, but can be changed later.
The expectation if you use ssh is that keyless login is set up (e.g.
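A typical keyless-login setup under OpenSSH (a sketch; key type and hostname are placeholders):
ssh-keygen -t ed25519 -N ""        # create a passphrase-less key
ssh-copy-id node1                  # install it on the target node
ssh node1 hostname                 # must work without a password prompt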
Dear Wien2k users,
I am trying to set up Wien2k on a (mid-size) computation cluster running
an SGE queueing system. Now, I am a bit confused as to how Wien2k spawns
processes for MPI execution. I am used to the scheme where mpirun takes
care of spawning its processes across the nodes
Hi, all:
I ran into some confusion when trying to compare the efficiency of MPI and
multi-threaded calculations. In the lapw1 stage of the same case, I found that
MPI takes double the time of the multi-threaded run. Beyond that, it even takes
longer than k-point parallelization without
It is not so easy to give unique answers to this question, as the
performance depends on:
a) your case (size of the problem)
b) your specific hardware (in particular network speed)
c) your mpi and mkl-software (version).
In my experience (but see the above remarks), and this is what is
clearly
I just ran some tests on a system using the mpi benchmark with up to 512
cores and also increased RKMAX from 6.0 to 9.0. Except for 512 cores (which
is slower for reasons I don't understand) everything scaled roughly as
Time = Constant*(Matrix Size)**2/sqrt(Number of Cores)
We seem to have no
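As a quick plausibility check of that empirical law (illustrative only, not from the benchmark): at fixed matrix size, time scales as 1/sqrt(cores), so quadrupling the cores should give about a 2x speedup:
awk 'BEGIN { printf "64 -> 256 cores: %.1fx faster\n", sqrt(256/64) }'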
I am posting this for general information only. In some cases (rare) the
mpi versions of Wien2k can hang forever when ssh is being used as a
launcher because one of the ssh process has become a zombie. This can occur
with impi and mvapich, perhaps others as well.
One reason (there may be others)
Dear all,
Thanks for the replies. On the machine there is no problem running k-point
parallelized calculations.
The .machines file for the MPI run has the form:
lapw0:localhost:4
1:localhost:4
2:localhost:4
hf:localhost:4
granularity:1
It is a Debian system with sym link /bin/csh -
Dear Wien2k users,
We are running a recent version of Wien2k, v13.1, in k-point
parallelization. To perform
screened HF we believe that MPI parallelization would speed up our calculations.
The calculations are intended, for test reasons, to be run on a local
multicore machine.
Our .machines file
Hi,
I don't know what is the problem, but I can just say that
in .machines there is no line specific for the HF module.
If lapw1 and lapw2 are run in parallel, then this will be the same for hf.
F. Tran
On Tue, 22 Oct 2013, Martin Gmitra wrote:
This here appears to be the problem:
@: Expression Syntax.
lapw1cpara (or actually the lapw1para_lapw it links to) is a shell
script, and there appears to be a script syntax issue somewhere during
or after parsing of the .machines file (cf. the Extrafine unset
message printed shortly before).
Sorry, I misread F.Tran's response - the extraneous hf:localhost:4
line might indeed be sufficient to derail the echo|sed|awk machinery in
the script.
--
Dr. Martin Kroeker mar...@ruby.chemie.uni-freiburg.de
c/o Prof. Dr. Caroline Roehr
Institut fuer Anorganische und Analytische Chemie
If the jobs are all on the same localhost, then they should all be set up
with the same speed:
lapw0:localhost:4
localhost:4
localhost:4
granularity:1
On Tue, Oct 22, 2013 at 2:21 AM, t...@theochem.tuwien.ac.at wrote:
Wrong syntax. You need a speed parameter. But of course, the speed should be
the same for shared memory:
1:localhost:4
1:localhost:4
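Combined with F. Tran's earlier remark that no hf: line is needed, a corrected sketch of the whole file:
lapw0:localhost:4
1:localhost:4
1:localhost:4
granularity:1
extrafine:1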
On 22.10.2013 18:42, Oliver Albertini wrote:
> If the jobs are all on the same localhost, then they should all be set up
> with the same speed:
> lapw0:localhost:4
Wien2k User
I am trying to get the MPI capabilities of Wien running, but I ran into some
complications.
The whole compilation process goes fine with no errors, but when I try to run
the code through run_lapw it stops at the beginning of the lapw1 program with
the following error:
The addition of the signal trapping in Wien2k (W2kinit in lapw[0-2].F
and others) has a plus, and a minus. The pluses are that the weekly
emails on the list about ulimit-associated crashes are gone, and also
(perhaps not so obvious) that mpi tasks die more gracefully. Unfortunately it
also can make knowing
Hello again,
I commented out the line "call W2kinit", and now I have a more descriptive
message, but I am still lost about it. Not sure if it's that it's not finding
some libraries, or that the environment variables are not being propagated to
all nodes.
forrtl: severe (174): SIGSEGV,
Hmmm, more information but not useful. I don't see anything obviously
wrong with what you are doing. Please regress to something simple
(e.g. TiC) -- I know it is not useful to run this with mpi, but for a
test it is useful to verify things.
Also, check using
It looks as if your .machines file is OK; I assume that you added the
A*** in front for emailing, but Wien2k does not use a hosts file
itself. I guess that you are using a server at IBM in Almaden.
Unfortunately very few people that I know of are running WIEN2k on
ibm/aix machines, which is going
Thanks to you both for the suggestions. The OS was recently updated beyond
those versions mentioned in the link (now 6100-08).
Adding the iostat statement to all the errclr.f files prevents the program
from stopping altogether, although error messages still appear in the output:
STOP LAPW0 END
Please have a look at the end of case.outputup_* which gives the real cpu
and wall times and post those. It may be that the times being reported are
misleading.
In addition, I do not understand why you are seeing an error and the script
is continuing - it should not. Maybe some of the tasks are
Dear W2K,
On an AIX 560 server with 16 processors, I have been running scf for a NiO
supercell (2x2x2) in serial as well as MPI parallel (one k-point). The
serial version runs fine. When running in parallel, the following error
appears:
STOP LAPW2 - FERMI; weighs written
errclr.f, line 64: 1525-014
I think these are semi-harmless, and you can add ,iostat=i to the
relevant lines. You may need to add the same to any write statements to
unit 99 in errclr.f.
However, your timing seems strange, 6.5 serial versus 9.5 parallel. Is this
CPU time? The WALL time may be more reliable.
STOP LAPW0 END
inilpw.f, line 233: 1525-142 The CLOSE statement on unit 200 cannot
be completed because an errno value of 2 (A file or directory in the
path name does not exist.) was received while closing the file. The
program will stop.
STOP LAPW1 END
If this is on operating system AIX
Dear Prof. Blaha, Prof. Marks and Wien2k community,
I noticed that siteconfig_lapw defines MPI_REMOTE as
setenv MPI_REMOTE 1
even when one answers 0 to the corresponding question. I had previously
changed it to 0, but I believe that I recompiled something after that and
the value 1 was
N.B., make sure to use the right blacs version when linking; this changes
with the different flavors of mpi. I often forget to do this.
Me too!! :)
Thank you again! I am grateful for your valuable advice!
All the best,
Luis Ogando
2013/2/22 Laurence Marks
Dear Wien2k community,
Is there any recommended flavor and version of an MPI compiler to use
with Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on
Intel(R) 64, Version 12.0.3.174 Build 20110309 ?
All the best,
Luis Ogando
One that works.
Some versions of openmpi have problems, although that is probably the
best option for the future. There are some tricky issues with openmpi
related to how your flavor of ssh works; there is no standard, and some
do not propagate kill commands, which means that they can leave
orphans.
Intel-mpi works of course very smoothly, but it is not free ...
On 20.02.2013 17:29, Luis Ogando wrote:
Dear Prof. Marks,
Thank you very much for your prompt answer.
I am using openmpi, but I believe that I am facing some of the tricky
issues you mentioned. I work on an SMP machine and the calculation starts
fine. After some tens of iterations, MPI suddenly asks for a password and
everything
Thank you Prof. Blaha!! By now, this is an infinite potential barrier
to me!!
All the best,
Luis Ogando
2013/2/20 Peter Blaha <pblaha at theochem.tuwien.ac.at>
> Intel-mpi works of course very smoothly, but it is not free ...
> On 20.02.2013 17:29, Luis Ogando wrote:
Hmmm.
This is my parallel_options on a machine with openmpi:
setenv USE_REMOTE 1
setenv MPI_REMOTE 0
setenv WIEN_GRANULARITY 1
setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -machinefile _HOSTS_ _EXEC_"
set a=`grep -e 1: .machines | grep -v lapw0 | head -1 | cut -f 3 -d:
| cut -c
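The placeholders in WIEN_MPIRUN are substituted by the parallel scripts; a sketch of what each stands for:
# _NP_    -> number of MPI processes for the job
# _HOSTS_ -> temporary machinefile generated from .machines
# _EXEC_  -> the program plus its def file, e.g. "lapw1_mpi lapw1_1.def"
setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -machinefile _HOSTS_ _EXEC_"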
From: Wien [wien-bounces at zeus.theochem.tuwien.ac.at] on behalf of
Luis Ogando [lcodacal at gmail.com]
Sent: Wednesday, 20 February 2013 17:48
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] MPI
Thank you Prof. Blaha!! By now, this is an infinite potential barrier to me!!
All the best,
Luis Ogando
2013/2/20 Peter Blaha <pblaha at theochem.tuwien.ac.at>
On an SMP machine make sure you have in $WIENROOT/parallel_options
setenv USE_REMOTE 0
setenv MPI_REMOTE 0
On 20.02.2013 17:45, Luis Ogando wrote:
> Dear Prof. Marks,
> Thank you very much for your prompt answer.
> I am using openmpi, but I believe that I am facing some of the tricky
I will check it !
Thanks again,
Luis Ogando
2013/2/20 Peter Blaha <pblaha at theochem.tuwien.ac.at>
> On an SMP machine make sure you have in $WIENROOT/parallel_options
> setenv USE_REMOTE 0
> setenv MPI_REMOTE 0
To: A Mailing list for WIEN2k users <wien at zeus.theochem.tuwien.ac.at>
Date: Thu, 26 Apr 2012 11:22:27 -0500
Subject: Re: [Wien] MPI parallel execution takes more time than nonparallel
on the same case on a multiprocessor single machine
dear all
Recently I switched to MPI parallel execution. I compiled scalapack 2.0.1
and GotoBLAS2 and recompiled the wien2k sources. Below you will see the
compiler options which I set for compiling wien2k.
Current settings:
O Compiler options: -ffree-form -O2 -ffree-line-length-none
How many cores do you have, and what version of mpi are you using?
Running mpi with only 3 processes on one machine is almost certainly
not going to be efficient; for that, just stay with non-mpi. With a
dual quadcore (8 cores) or more it can be, provided that the mpi
version you use optimizes
Hi,
I have Wien2K running on a cluster of linux boxes, each with 32 cores
and connected by 10Gb ethernet. I have compiled Wien2K with the 3.174 version
of the Intel compiler (I learned the hard way that bugs in the newer versions
of the Intel compiler lead to crashes in Wien2K). I have also
Thank you very much for your suggestion. I actually managed to figure this out
by myself an hour or so ago. At the same time (usually not a good idea) I also
compiled the mkl interface for fftw2 rather than using the standalone version I
had compiled myself earlier. Thus the RP library
Read the UG about mpi-parallelization.
It is not supposed to give you any performance gain for a TiC case. It is
useful ONLY for larger cases.
Using 5 mpi processes is particularly bad. One should divide the
matrices into 2x2, 4x4, or (for your 32-core machines) into 4x8, but
not into 1x5, 1x7,
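As a .machines sketch of that advice for one 32-core node:
# 16 processes factor into a 4x4 process grid -- good:
1:host1:16
# 5 processes force a 1x5 grid -- avoid prime process counts like 5 or 7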
A guess: you are using the wrong version of blacs. You need a
-lmkl_blacs_intelmpi_XX
where XX is the one for your system. I have seen this give the same error.
Use http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/
For reference, with openmpi it is _openmpi_ instead of _intelmpi_.