[Pw_forum] PAW+HSE in QE 5.2

2015-07-20 Thread DHIRENDRA VAIDYA
Is PAW+HSE now available in QE 5.2?

-- 
--
Dhirendra
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] ph.x output files in an electron-phonon interaction calculation

2015-07-20 Thread Fred Yang
Hello,

I am currently running ph.x to calculate quantities related to the
electron-phonon interaction in my system, and I've noticed that many output
files are written. I know some of the files, such as the .dyn files, are
needed for later calculations, but there are also many dvscf files, along
with many .xml files. Are there some files I can safely delete? I'm asking
because the ph.x electron-phonon calculation is taking up a huge amount of
the available disk space.

Thank you,
Fred Yang
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum
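A quick way to see which of the phonon files dominate the disk usage is a couple
of shell commands. This is only a minimal sketch: it assumes the run used
outdir='./tmp', so that the dvscf files and the partial .xml files sit under
./tmp/_ph0; adjust the paths to your own outdir and prefix.

# list the largest items in the phonon scratch directory
du -sh ./tmp/_ph0/* | sort -h | tail -20
# show the dvscf and dynamical-matrix files explicitly
ls -lh ./tmp/_ph0/*dvscf* ./*.dyn* 2>/dev/null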

[Pw_forum] the algorithm of Virtual.x and related reference paper

2015-07-20 Thread Yi Wang
Dear developers,

What algorithm does virtual.x use to generate VCA pseudopotentials?
Which papers do I need to cite when using this tool?

Thank you very much.


Yi Wang
Ph.D. candidate at Nanjing University of Science and Technology
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] How to calculate charged system binding energy?

2015-07-20 Thread Bahadır salmankurt
Dear Mostafa Y.,

Thanks for the useful information.

2015-07-19 5:29 GMT+03:00 Mostafa Youssef :

>  Dear Bahadir
>
> First, charged slabs are problematic because their total energy does not
> converge with respect to the vacuum thickness; you can test this on a simple
> model. However, there is a trick to get around this: insert a dopant far away
> from the critical reaction zone. For example, suppose you want to study a +1
> charged defect on a ZrO2 surface. You can then insert one Y ion (typically 3+,
> and hence -1 with respect to Zr4+). This should generate a positive
> charge somewhere else in the slab, and one hopes that this positive charge
> will localize where you expect it to. But one has to be
> cautious, because this also generates a large dipole across the slab. One
> way to get around that is to symmetrize the slab so that the dipoles cancel.
>
>
> Second, even if charged slabs work, I think you are not conserving the
> charge when you calculate the binding energy, because you mentioned you
> used tot_charge = -1 for the slab+molecule, the slab only, and the molecule
> only. That choice does not conserve the charge in the binding energy
> (B.E.)
>
> B.E. = (slab+molec.) - (slab) - (molec)
>
> Although I have seen papers defining binding energies that do not conserve
> the charge, I do not think this is meaningful.
>
> Mostafa Y.
> MIT
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum
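To make the charge-conserving choice concrete, here is a minimal sketch (the
output file names slab_mol.out, slab.out and mol.out are hypothetical). It
extracts the converged total energies that pw.x prints on the lines starting
with '!' and combines them so that the total charge balances, e.g.
-1 (slab+molecule) = -1 (slab) + 0 (molecule):

# tot_charge used in the three runs: -1, -1 and 0, so the charge is conserved
E_sm=$(grep '^!' slab_mol.out | tail -1 | awk '{print $(NF-1)}')
E_s=$(grep '^!' slab.out | tail -1 | awk '{print $(NF-1)}')
E_m=$(grep '^!' mol.out | tail -1 | awk '{print $(NF-1)}')
# B.E. = (slab+molec.) - (slab) - (molec), in Ry
echo "B.E. = $(echo "$E_sm - $E_s - $E_m" | bc -l) Ry"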

Re: [Pw_forum] error in running pw.x command

2015-07-20 Thread mohaddeseh abbasnejad
Dear all,

Thanks for your comments.
I will check them out.

Regards,
Mohaddeseh


On Mon, Jul 20, 2015 at 12:57 PM, nicola varini 
wrote:

> Dear all, if you use MKL you can rely on the Intel link line advisor for
> proper linking:
> https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
> If you open the file $MKL_ROOT/include/mkl.h you will see the version number.
> It should be something like
>
> #define __INTEL_MKL__ 11
>
> #define __INTEL_MKL_MINOR__ 2
>
> #define __INTEL_MKL_UPDATE__ 2
>
> In the advisor linked above, enter your version number, OS, and the other options.
>
> It will output the link options that you should then use.
>
> HTH
>
>
> Nicola
>
>
>
>
> 2015-07-20 9:57 GMT+02:00 Bahadır salmankurt :
>
>> Dear Mohaddeseh et co,
>>
>> installing one of the older versions of MPI could solve the problem.
>>
>> 2015-07-20 10:06 GMT+03:00 Ari P Seitsonen :
>>
>>>
>>> Dear Mohaddeseh et co,
>>>
>>>   Just a note: I used to have such problems when I had compiled with an
>>> old version of MKL-ScaLAPACK, indeed around 11.1, and ran with more than
>>> four cores. I think I managed to run once I disabled ScaLAPACK. Of course
>>> this might be entirely unrelated to your problem.
>>>
>>> Greetings from Lappeenranta,
>>>
>>>apsi
>>>
>>>
>>> -=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-
>>>   Ari Paavo Seitsonen / ari.p.seitso...@iki.fi /
>>> http://www.iki.fi/~apsi/
>>>   Ecole Normale Supérieure (ENS), Département de Chimie, Paris
>>>   Mobile (F) : +33 789 37 24 25(CH) : +41 79 71 90 935
>>>
>>>
>>>
>>> On Mon, 20 Jul 2015, Paolo Giannozzi wrote:
>>>
>>>  This is not a QE problem: the Fortran code knows nothing about nodes
 and cores. It's the software setup for parallel execution on your machine
 that has a problem.

 Paolo

 On Thu, Jul 16, 2015 at 2:25 PM, mohaddeseh abbasnejad <
 m.abbasne...@gmail.com> wrote:

   Dear all,

 I have recently installed PWscf (version 5.1) on our cluster (4 nodes,
 32 cores).
 Ifort & MKL version 11.1 have been installed.
 When I run the pw.x command on each node individually, both of the
 following commands work properly.
 1- /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
 2- mpirun -n 4 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
 However, when I use the following command (again for each of them,
 separately),
 3- mpirun -n 8 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
 it gives me the following error:

 [cluster:14752] *** Process received signal ***
 [cluster:14752] Signal: Segmentation fault (11)
 [cluster:14752] Signal code:  (128)
 [cluster:14752] Failing at address: (nil)
 [cluster:14752] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
 [cluster:14752] [ 1]
 /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
 [0x2b5e8e37d4f9]
 [cluster:14752] *** End of error message ***

 --
 mpirun noticed that process rank 4 with PID 14752 on node
 cluster.khayam.local exited on signal 11 (Segmentation fault).

 --

 This error also occurs when I use all the nodes together in
 parallel mode (using the following command):
 4- mpirun -n 32 -hostfile testhost /opt/exp_soft/espresso-5.1/bin/pw.x
 -in scf.in
 The error:

 [cluster:14838] *** Process received signal ***
 [cluster:14838] Signal: Segmentation fault (11)
 [cluster:14838] Signal code:  (128)
 [cluster:14838] Failing at address: (nil)
 [cluster:14838] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
 [cluster:14838] [ 1]
 /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
 [0x2b04082cf4f9]
 [cluster:14838] *** End of error message ***

 --
 mpirun noticed that process rank 24 with PID 14838 on node
 cluster.khayam.local exited on signal 11 (Segmentation fault).

 --

 Any help will be appreciated.

 Regards,
 Mohaddeseh

 -

 Mohaddeseh Abbasnejad,
 Room No. 323, Department of Physics,
 University of Tehran, North Karegar Ave.,
 Tehran, P.O. Box: 14395-547- IRAN
 Tel. No.: +98 21 6111 8634  & Fax No.: +98 21 8800 4781
 Cellphone: +98 917 731 7514
 E-Mail: m.abbasne...@gmail.com
 Website:  http://physics.ut.ac.ir

 -

 ___
 Pw_forum mailing list
 Pw_forum@pwscf.org
 

Re: [Pw_forum] (no subject)

2015-07-20 Thread ashkan shekaari
Dear Giannozzi
Is an MoS2 bilayer an inhomogeneous system?

Kind regards
Ashkan Shekaari
Tel: +98 933 459 7122; +98 921 346 7384
On Jul 20, 2015 12:47 AM, "Paolo Giannozzi"  wrote:

> A system with a strongly inhomogeneous charge density (e.g. a surface)
>
> Paolo
>
> On Sun, Jul 19, 2015 at 7:57 PM, ashkan shekaari 
> wrote:
>
>> What is the meaning of a highly inhomogeneous system?
>>
>> Kind regards
>> Ashkan Shekaari
>> Tel: +98 933 459 7122; +98 921 346 7384
>> On Jul 19, 2015 10:20 PM, "ashkan shekaari"  wrote:
>>
>>> Dear users
>>> Can I use local-TF mixing for monolayer and bilayer MoS2?
>>>
>>> Kind regards
>>> Ashkan Shekaari
>>> Tel: +98 933 459 7122; +98 921 346 7384
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
>
> --
> Paolo Giannozzi, Dept. Chemistry,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum
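For reference, local-TF only requires changing mixing_mode in the &ELECTRONS
namelist of the pw.x input. A minimal sketch follows; the file name and the
mixing_beta value are just illustrative choices, not recommendations for MoS2:

# write an &ELECTRONS block that uses local Thomas-Fermi charge mixing,
# intended for strongly inhomogeneous systems such as slabs with vacuum
cat > electrons_block.txt << 'EOF'
 &ELECTRONS
    mixing_mode = 'local-TF'
    mixing_beta = 0.3
    conv_thr    = 1.0d-8
 /
EOF
# paste this block in place of the &ELECTRONS namelist of your pw.x input file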

Re: [Pw_forum] error in running pw.x command

2015-07-20 Thread nicola varini
Dear all, if you use MKL you can rely on the Intel link line advisor for
proper linking:
https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
If you open the file $MKL_ROOT/include/mkl.h you will see the version number.
It should be something like

#define __INTEL_MKL__ 11

#define __INTEL_MKL_MINOR__ 2

#define __INTEL_MKL_UPDATE__ 2

In the advisor linked above, enter your version number, OS, and the other options.

It will output the link options that you should then use.

HTH


Nicola




2015-07-20 9:57 GMT+02:00 Bahadır salmankurt :

> Dear Mohaddeseh et co,
>
> installing one of the older versions of MPI could solve the problem.
>
> 2015-07-20 10:06 GMT+03:00 Ari P Seitsonen :
>
>>
>> Dear Mohaddeseh et co,
>>
>>   Just a note: I used to have such problems when I had compiled with an
>> old version of MKL-ScaLAPACK, indeed around 11.1, and ran with more than
>> four cores. I think I managed to run once I disabled ScaLAPACK. Of course
>> this might be entirely unrelated to your problem.
>>
>> Greetings from Lappeenranta,
>>
>>apsi
>>
>>
>> -=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-
>>   Ari Paavo Seitsonen / ari.p.seitso...@iki.fi / http://www.iki.fi/~apsi/
>>   Ecole Normale Supérieure (ENS), Département de Chimie, Paris
>>   Mobile (F) : +33 789 37 24 25(CH) : +41 79 71 90 935
>>
>>
>>
>> On Mon, 20 Jul 2015, Paolo Giannozzi wrote:
>>
>>  This is not a QE problem: the Fortran code knows nothing about nodes and
>>> cores. It's the software setup for parallel execution on your machine that
>>> has a problem.
>>>
>>> Paolo
>>>
>>> On Thu, Jul 16, 2015 at 2:25 PM, mohaddeseh abbasnejad <
>>> m.abbasne...@gmail.com> wrote:
>>>
>>>   Dear all,
>>>
>>> I have recently installed PWscf (version 5.1) on our cluster (4 nodes,
>>> 32 cores).
>>> Ifort & MKL version 11.1 have been installed.
>>> When I run the pw.x command on each node individually, both of the
>>> following commands work properly.
>>> 1- /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>>> 2- mpirun -n 4 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>>> However, when I use the following command (again for each of them,
>>> separately),
>>> 3- mpirun -n 8 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>>> it gives me the following error:
>>>
>>> [cluster:14752] *** Process received signal ***
>>> [cluster:14752] Signal: Segmentation fault (11)
>>> [cluster:14752] Signal code:  (128)
>>> [cluster:14752] Failing at address: (nil)
>>> [cluster:14752] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
>>> [cluster:14752] [ 1]
>>> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
>>> [0x2b5e8e37d4f9]
>>> [cluster:14752] *** End of error message ***
>>>
>>> --
>>> mpirun noticed that process rank 4 with PID 14752 on node
>>> cluster.khayam.local exited on signal 11 (Segmentation fault).
>>>
>>> --
>>>
>>> This error also occurs when I use all the nodes together in
>>> parallel mode (using the following command):
>>> 4- mpirun -n 32 -hostfile testhost /opt/exp_soft/espresso-5.1/bin/pw.x
>>> -in scf.in
>>> The error:
>>>
>>> [cluster:14838] *** Process received signal ***
>>> [cluster:14838] Signal: Segmentation fault (11)
>>> [cluster:14838] Signal code:  (128)
>>> [cluster:14838] Failing at address: (nil)
>>> [cluster:14838] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
>>> [cluster:14838] [ 1]
>>> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
>>> [0x2b04082cf4f9]
>>> [cluster:14838] *** End of error message ***
>>>
>>> --
>>> mpirun noticed that process rank 24 with PID 14838 on node
>>> cluster.khayam.local exited on signal 11 (Segmentation fault).
>>>
>>> --
>>>
>>> Any help will be appreciated.
>>>
>>> Regards,
>>> Mohaddeseh
>>>
>>> -
>>>
>>> Mohaddeseh Abbasnejad,
>>> Room No. 323, Department of Physics,
>>> University of Tehran, North Karegar Ave.,
>>> Tehran, P.O. Box: 14395-547- IRAN
>>> Tel. No.: +98 21 6111 8634  & Fax No.: +98 21 8800 4781
>>> Cellphone: +98 917 731 7514
>>> E-Mail: m.abbasne...@gmail.com
>>> Website:  http://physics.ut.ac.ir
>>>
>>> -
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>>
>>>
>>>
>>> --
>>> Paolo Giannozzi, Dept. Chemistry,
>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>> Phone +39-0432-558216, fax +39-0432-558222
>>>
>>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> 
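A one-line version of the check Nicola describes above, assuming the Intel
environment scripts have set MKLROOT (if your installation uses a different
variable, point the path at the actual include directory):

# print the MKL version macros from the header
grep -E '__INTEL_MKL(_MINOR|_UPDATE)?__' "$MKLROOT/include/mkl.h"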

Re: [Pw_forum] error in running pw.x command

2015-07-20 Thread Bahadır salmankurt
Dear Mohaddeseh et co,

installing one of the older versions of MPI could solve the problem.

2015-07-20 10:06 GMT+03:00 Ari P Seitsonen :

>
> Dear Mohaddeseh et co,
>
>   Just a note: I used to have such problems when I had compiled with an
> old version of MKL-ScaLAPACK, indeed around 11.1, and ran with more than
> four cores. I think I managed to run once I disabled ScaLAPACK. Of course
> this might be entirely unrelated to your problem.
>
> Greetings from Lappeenranta,
>
>apsi
>
>
> -=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-
>   Ari Paavo Seitsonen / ari.p.seitso...@iki.fi / http://www.iki.fi/~apsi/
>   Ecole Normale Supérieure (ENS), Département de Chimie, Paris
>   Mobile (F) : +33 789 37 24 25(CH) : +41 79 71 90 935
>
>
>
> On Mon, 20 Jul 2015, Paolo Giannozzi wrote:
>
>  This is not a QE problem: the Fortran code knows nothing about nodes and
>> cores. It's the software setup for parallel execution on your machine that
>> has a problem.
>>
>> Paolo
>>
>> On Thu, Jul 16, 2015 at 2:25 PM, mohaddeseh abbasnejad <
>> m.abbasne...@gmail.com> wrote:
>>
>>   Dear all,
>>
>> I have recently installed PWscf (version 5.1) on our cluster (4 nodes, 32
>> cores).
>> Ifort & MKL version 11.1 have been installed.
>> When I run the pw.x command on each node individually, both of the
>> following commands work properly.
>> 1- /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>> 2- mpirun -n 4 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>> However, when I use the following command (again for each of them,
>> separately),
>> 3- mpirun -n 8 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>> it gives me the following error:
>>
>> [cluster:14752] *** Process received signal ***
>> [cluster:14752] Signal: Segmentation fault (11)
>> [cluster:14752] Signal code:  (128)
>> [cluster:14752] Failing at address: (nil)
>> [cluster:14752] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
>> [cluster:14752] [ 1]
>> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
>> [0x2b5e8e37d4f9]
>> [cluster:14752] *** End of error message ***
>> --
>> mpirun noticed that process rank 4 with PID 14752 on node
>> cluster.khayam.local exited on signal 11 (Segmentation fault).
>> --
>>
>> This error also occurs when I use all the nodes together in
>> parallel mode (using the following command):
>> 4- mpirun -n 32 -hostfile testhost /opt/exp_soft/espresso-5.1/bin/pw.x
>> -in scf.in
>> The error:
>>
>> [cluster:14838] *** Process received signal ***
>> [cluster:14838] Signal: Segmentation fault (11)
>> [cluster:14838] Signal code:  (128)
>> [cluster:14838] Failing at address: (nil)
>> [cluster:14838] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
>> [cluster:14838] [ 1]
>> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
>> [0x2b04082cf4f9]
>> [cluster:14838] *** End of error message ***
>> --
>> mpirun noticed that process rank 24 with PID 14838 on node
>> cluster.khayam.local exited on signal 11 (Segmentation fault).
>> --
>>
>> Any help will be appreciated.
>>
>> Regards,
>> Mohaddeseh
>>
>> -
>>
>> Mohaddeseh Abbasnejad,
>> Room No. 323, Department of Physics,
>> University of Tehran, North Karegar Ave.,
>> Tehran, P.O. Box: 14395-547- IRAN
>> Tel. No.: +98 21 6111 8634  & Fax No.: +98 21 8800 4781
>> Cellphone: +98 917 731 7514
>> E-Mail: m.abbasne...@gmail.com
>> Website:  http://physics.ut.ac.ir
>>
>> -
>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>>
>>
>>
>> --
>> Paolo Giannozzi, Dept. Chemistry,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216, fax +39-0432-558222
>>
>>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum
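Before installing a different MPI, it may help to check which MPI
implementation the cluster currently provides; most launchers accept a version
flag (a minimal sketch, assuming mpirun is in the PATH):

# identify the MPI implementation and version behind mpirun
which mpirun
mpirun --version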

Re: [Pw_forum] error in running pw.x command

2015-07-20 Thread Ari P Seitsonen


Dear Mohaddeseh et co,

  Just a note: I used to have such problems when I had compiled with an
old version of MKL-ScaLAPACK, indeed around 11.1, and ran with more
than four cores. I think I managed to run once I disabled ScaLAPACK. Of
course this might be entirely unrelated to your problem.


Greetings from Lappeenranta,

   apsi

-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-
  Ari Paavo Seitsonen / ari.p.seitso...@iki.fi / http://www.iki.fi/~apsi/
  Ecole Normale Supérieure (ENS), Département de Chimie, Paris
  Mobile (F) : +33 789 37 24 25(CH) : +41 79 71 90 935


On Mon, 20 Jul 2015, Paolo Giannozzi wrote:


This is not a QE problem: the Fortran code knows nothing about nodes and cores.
It's the software setup for parallel execution on your machine that has a 
problem.

Paolo

On Thu, Jul 16, 2015 at 2:25 PM, mohaddeseh abbasnejad  
wrote:

  Dear all,

I have recently installed PWscf (version 5.1) on our cluster (4 nodes, 32 
cores).
Ifort & MKL version 11.1 have been installed.
When I run the pw.x command on each node individually, both of the following
commands work properly.
1- /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
2- mpirun -n 4 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
However, when I use the following command (again for each of them, separately),
3- mpirun -n 8 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
it gives me the following error:

[cluster:14752] *** Process received signal ***
[cluster:14752] Signal: Segmentation fault (11)
[cluster:14752] Signal code:  (128)
[cluster:14752] Failing at address: (nil)
[cluster:14752] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
[cluster:14752] [ 1] 
/opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79) 
[0x2b5e8e37d4f9]
[cluster:14752] *** End of error message ***
--
mpirun noticed that process rank 4 with PID 14752 on node cluster.khayam.local 
exited on signal 11 (Segmentation fault).
--

This error also occurs when I use all the nodes together in parallel mode
(using the following command):
4- mpirun -n 32 -hostfile testhost /opt/exp_soft/espresso-5.1/bin/pw.x -in 
scf.in
The error:

[cluster:14838] *** Process received signal ***
[cluster:14838] Signal: Segmentation fault (11)
[cluster:14838] Signal code:  (128)
[cluster:14838] Failing at address: (nil)
[cluster:14838] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
[cluster:14838] [ 1] 
/opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79) 
[0x2b04082cf4f9]
[cluster:14838] *** End of error message ***
--
mpirun noticed that process rank 24 with PID 14838 on node cluster.khayam.local 
exited on signal 11 (Segmentation fault).
--

Any help will be appreciated.

Regards,
Mohaddeseh

-

Mohaddeseh Abbasnejad,
Room No. 323, Department of Physics,
University of Tehran, North Karegar Ave.,
Tehran, P.O. Box: 14395-547- IRAN
Tel. No.: +98 21 6111 8634  & Fax No.: +98 21 8800 4781
Cellphone: +98 917 731 7514
E-Mail:     m.abbasne...@gmail.com
Website:  http://physics.ut.ac.ir

-

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum




--
Paolo Giannozzi, Dept. Chemistry,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum
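Two quick checks related to Ari's suggestion, as a minimal sketch: first see
whether ScaLAPACK/BLACS is actually linked into the executable, then rebuild
without it. The --without-scalapack switch is the standard autoconf form; check
./configure --help for the exact spelling in your QE version.

# which MKL / ScaLAPACK / BLACS libraries does pw.x actually link against?
ldd /opt/exp_soft/espresso-5.1/bin/pw.x | grep -iE 'mkl|scalapack|blacs'
# rebuild QE without ScaLAPACK to test whether it is the culprit
cd espresso-5.1 && ./configure --without-scalapack && make pw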

Re: [Pw_forum] error in running pw.x command

2015-07-20 Thread Paolo Giannozzi
This is not a QE problem: the Fortran code knows nothing about nodes and
cores. It's the software setup for parallel execution on your machine that
has a problem.

Paolo

On Thu, Jul 16, 2015 at 2:25 PM, mohaddeseh abbasnejad <
m.abbasne...@gmail.com> wrote:

>
> Dear all,
>
> I have recently installed PWscf (version 5.1) on our cluster (4 nodes, 32
> cores).
> Ifort & MKL version 11.1 have been installed.
> When I run the pw.x command on each node individually, both of the following
> commands work properly.
> 1- /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
> 2- mpirun -n 4 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
> However, when I use the following command (again for each of them,
> separately),
> 3- mpirun -n 8 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
> it gives me the following error:
>
> [cluster:14752] *** Process received signal ***
> [cluster:14752] Signal: Segmentation fault (11)
> [cluster:14752] Signal code:  (128)
> [cluster:14752] Failing at address: (nil)
> [cluster:14752] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
> [cluster:14752] [ 1]
> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
> [0x2b5e8e37d4f9]
> [cluster:14752] *** End of error message ***
> --
> mpirun noticed that process rank 4 with PID 14752 on node
> cluster.khayam.local exited on signal 11 (Segmentation fault).
> --
>
> This error also occurs when I use all the nodes together in parallel
> mode (using the following command):
> 4- mpirun -n 32 -hostfile testhost /opt/exp_soft/espresso-5.1/bin/pw.x -in
> scf.in
> The error:
>
> [cluster:14838] *** Process received signal ***
> [cluster:14838] Signal: Segmentation fault (11)
> [cluster:14838] Signal code:  (128)
> [cluster:14838] Failing at address: (nil)
> [cluster:14838] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
> [cluster:14838] [ 1]
> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
> [0x2b04082cf4f9]
> [cluster:14838] *** End of error message ***
> --
> mpirun noticed that process rank 24 with PID 14838 on node
> cluster.khayam.local exited on signal 11 (Segmentation fault).
> --
>
> Any help will be appreciated.
>
> Regards,
> Mohaddeseh
>
> -
>
> Mohaddeseh Abbasnejad,
> Room No. 323, Department of Physics,
> University of Tehran, North Karegar Ave.,
> Tehran, P.O. Box: 14395-547- IRAN
> Tel. No.: +98 21 6111 8634  & Fax No.: +98 21 8800 4781
> Cellphone: +98 917 731 7514
> E-Mail: m.abbasne...@gmail.com
> Website:  http://physics.ut.ac.ir
>
> -
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>



-- 
Paolo Giannozzi, Dept. Chemistry,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum
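Since Paolo points at the parallel-execution setup rather than at QE, a simple
sanity test is to launch something trivial with the same mpirun command lines
used for pw.x; if these fail, the problem lies in the MPI setup (a minimal
sketch reusing the rank counts and hostfile from the original post):

# same launch commands as for pw.x, but running only 'hostname'
mpirun -n 8 hostname
mpirun -n 32 -hostfile testhost hostname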