Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-23 Thread Rémi Arras
Thank you everybody for your answers. Regarding the .machines file, we already have a script and the file is generated correctly. We will check the links again and test another version of the fftw3 library. I will keep you informed if the problem is solved. Best regards, Rémi Arras. On 22/10/2014 14:22

Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Peter Blaha
Usually the "crucial" point for lapw0 is the fftw3 library. I noticed you have fftw-3.3.4, which I have never tested. Since fftw2 and fftw3 are incompatible, maybe they have changed something again ... Besides that, I assume you have installed fftw using the same ifort and MPI versions ...
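As a sketch of what "installed fftw using the same ifort and MPI versions" means in practice, the FFTW 3.x configure step can be pointed at the Intel compilers and MPI wrappers explicitly. The install prefix and compiler wrapper names below are assumptions for a typical Intel cluster setup, not taken from the thread:

```shell
# Build fftw-3.3.x with the same Intel compilers and MPI stack used for
# WIEN2k; --enable-mpi also builds the MPI-parallel FFTW library.
./configure CC=icc F77=ifort MPICC=mpiicc --enable-mpi \
            --prefix=$HOME/fftw-3.3.4-intel
make && make install
```

Mixing an fftw library built with a different compiler or MPI implementation than the one used to link lapw0_mpi is a common source of crashes at startup.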

Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Laurence Marks
It is often hard to know exactly what the issues are with MPI. Most often it is due to incorrect combinations of scalapack/blacs in the linking options. The first thing to check is your linking options with https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/. What you have does not
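For illustration, a typical scalapack/blacs combination produced by the MKL Link Line Advisor for ifort with Intel MPI and the LP64 interface looks like the fragment below. This is a generic example, not the poster's actual options, and the exact library names depend on the MKL version, so it should be verified against the advisor:

```shell
# Example MKL cluster link options (ifort + Intel MPI, LP64 interface).
# The BLACS library must match the MPI implementation: for OpenMPI use
# -lmkl_blacs_openmpi_lp64 instead of -lmkl_blacs_intelmpi_lp64.
SCALAPACK_LIBS="-lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"
MKL_LIBS="-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm"
RP_LIBS="$SCALAPACK_LIBS $MKL_LIBS"
```

A blacs library that does not match the MPI implementation typically links fine but hangs or crashes at runtime, which matches the hard-to-diagnose symptoms described above.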

Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Michael Sluydts
Perhaps an important note: the python script is for a Torque PBS queuing system (based on $PBS_NODEFILE). Rémi Arras wrote on 22/10/2014 13:29: Dear Pr. Blaha, Dear Wien2k users, We tried to install the latest version of Wien2k (14.1) on a supercomputer and we are facing some troubles with the

Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Michael Sluydts
Hello Rémi, While I'm not sure this is the (only) problem, in our setup we also pass the machines file to mpirun: setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_" which I generate based on a 1-k-point-per-node setup with the following python script: /wienhybrid #!/usr/bin/env
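The actual script is truncated in the archive, but the idea it describes (turning $PBS_NODEFILE into a WIEN2k .machines file, one k-point line per node) can be sketched as below. This is a hypothetical reconstruction, not the original /wienhybrid script; the node names and the granularity/extrafine lines are assumptions based on the standard .machines format:

```python
#!/usr/bin/env python
# Hypothetical sketch: generate a WIEN2k .machines file from a Torque/PBS
# node file, assigning one k-point line ("1:host:cores") per node.
import os
from collections import OrderedDict

def build_machines(nodefile_lines):
    """$PBS_NODEFILE lists one hostname per allocated core; group the
    entries into host -> core count, then emit one k-point line per node
    plus a lapw0 line spanning all allocated cores."""
    counts = OrderedDict()
    for host in (line.strip() for line in nodefile_lines):
        if host:
            counts[host] = counts.get(host, 0) + 1
    lines = ["1:%s:%d" % (h, n) for h, n in counts.items()]
    lines.append("lapw0:" + " ".join("%s:%d" % (h, n)
                                     for h, n in counts.items()))
    lines.append("granularity:1")
    lines.append("extrafine:1")
    return "\n".join(lines)

if __name__ == "__main__" and "PBS_NODEFILE" in os.environ:
    with open(os.environ["PBS_NODEFILE"]) as f:
        print(build_machines(f.readlines()))
```

With two nodes of two cores each in the node file, this would emit one `1:` line per node followed by a `lapw0:` line over all four cores, which is the layout WIEN2k's parallel scripts expect.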