[Pw_forum] Help regarding error from test_input_xml: Empty input file .. stopping

2014-03-15 Thread Torstein Fjermestad
Dear Bramha Prasad Pandey,

I do not remember how I solved it. I think I ended up running QE on 
another machine.

Regards,
Torstein


On 2014-03-14 08:29, Bramha Pandey wrote:
> Dear all PW developers and users, and especially Prof. Torstein: I am
> getting the error "from test_input_xml: Empty input file .. stopping"
> when trying to run a parallel program on a PC cluster (only 3 nodes),
> each having 4 GB RAM and a 500 GB hard disk.
> As you (Prof. Torstein) have got this type of error, mentioned in
> http://qe-forge.org/pipermail/pw_forum/2012-March/098067.html
> 
> Could you kindly give me some hints on how I can get rid of this type
> of error?
> I would highly appreciate your kind cooperation in this
> regard.
> 
> --
> 
> Thanks and Regards
> Bramha Prasad Pandey
> GLA University
> Mathura (U.P)
> INDIA.
> 
> 


[Pw_forum] Some questions about vibrational modes

2012-10-19 Thread Torstein Fjermestad
 Dear Prof. Baroni,

 First of all, thanks for your help. It is not easy to give an accurate 
 description of a structure and its vibrational modes in an e-mail. I 
 think it would therefore be better to attach a .axsf file so that it 
 would be possible to visualize the modes in XCrySDen. Unfortunately, the 
 e-mail with the attached file was rejected because it was too large 
 (560 kB). However, if you (or anyone else) are interested in having a 
 look at the vibrational modes, I could send the .axsf file directly to 
 you.

 Concerning your questions / comments on the translational modes of the 
 Si(OH)4 species: I have not been able to identify a rattle mode in the 
 z direction, and I suspect that your suggestion that this mode might be 
 tightly coupled to the cage motion is correct. This mode should, by the 
 way, be precisely mode 1, where the whole unit cell is translating in 
 the z direction.
 The difference between the Si(OH)4 translation in the xy plane and in 
 the z direction could indeed be due to anisotropy. The Si(OH)4 species 
 is located closer to one of the walls (presumably held there by 
 non-bonding dispersion interactions), and it might be that the 
 translation in the z direction is somehow hindered.

 My problem concerns the computation of the Gibbs free energy. However, 
 I do not know whether it is correct to exclude modes 3 and 4 from the 
 computation, as these modes also involve motions of the Si(OH)4 species 
 in addition to the unit cell translation. I would appreciate it very 
 much if someone could help me with this problem.

 Thanks in advance.

 Yours sincerely,

 Torstein Fjermestad
 University of Oslo,
 Norway


 
 
 


 On Sun, 14 Oct 2012 19:27:15 +0200, Stefano Baroni  
 wrote:
> On Oct 12, 2012, at 9:50 AM, Torstein Fjermestad wrote:
>
>> With the purpose of obtaining the Gibbs free energy, I am computing the
>> vibrational modes of a system. The system is a microporous zeotype
>> material with an extra-framework species (Si(OH)4) located in the pore.
>> The vibrational modes were obtained by first optimizing the structure
>> with increased accuracy (forc_conv_thr=1.0d-5, etot_conv_thr=1.0d-6).
>> Thereafter I did a phonon calculation with the tr2_ph option set to
>> 1.0e-14. I had expected three of the vibrational modes to correspond to
>> a translation of all atoms in the unit cell in one direction. This
>> happens indeed for the lowest mode (mode 1, frequency = 3 cm-1), which
>> corresponds to a translation in the z direction. Modes 3 (frequency =
>> 29 cm-1) and 4 (frequency = 30 cm-1) show a translation of the
>> framework atoms in the y and x direction respectively, but the Si(OH)4
>> species is not translating with the framework atoms. Is such a
>> behavior expected?
>
> NO. This, however, may happen if your system has other "quasi-soft"
> modes that are almost degenerate with the acoustic modes. I know very
> little of your system, but I can imagine that the "extra-framework"
> species is associated with "rattle" modes that, if the cage is larger
> than the species itself, may become soft. I do not know whether the
> difference in the behaviors along the z axis and in the xy plane is due
> to the anisotropy of the cage, or whether by accident (i.e. numerics)
> the rattle mode along z is more tightly coupled with the cage motion
> than the xy modes are ... Please let us know more.
>
> WHAT ABOUT MODE #2?
>
>> When calculating the vibrational contribution to the Gibbs free
>> energy, one should not include modes corresponding to a translation of
>> the whole unit cell, but what about cases such as modes 3 and 4, where
>> the Si(OH)4 species is not translating with the rest of the atoms? How
>> are such cases treated correctly?
>
> I think they should be treated as low-frequency modes, as they
> probably are. (Not sure about the numbering, though: one of them is
> probably a mixed rattle-translation mode.)
>
>> Another issue concerns the extra-framework species Si(OH)4. In modes 2
>> (frequency = 23 cm-1) and 5 (frequency = 51 cm-1) the species is being
>> translated while the rest of the atoms are relatively static. Wouldn't
>> it be better to treat these modes as translations instead of
>> vibrations? In that case, how is this done correctly?
>
> Ah! ah! you said it! Mode 2 is a rattle mode, and so is mode 5! I am
> curious about the rattle mode in the z direction. Has it a larger
> frequency? Can you make sense of it?
>
>> Thank you in advance for your help.
>
> Hope you had some ...
>
> Cheers - SB
>
> ---
> Stefano Baroni - http://stefano.baroni.me, stefanobaroni (skype)
> on leave of absence from SISSA, Trieste, presently at the Department
> of Materials, EPF Lausanne (until March 2013)
>
> La morale est une logique de l'action comme la logique est une morale
> de la pensée - Jean Piaget ("Morality is a logic of action just as
> logic is a morality of thought")
>
>
>



[Pw_forum] Some questions about vibrational modes

2012-10-12 Thread Torstein Fjermestad
 Dear all,

 With the purpose of obtaining the Gibbs free energy, I am computing the 
 vibrational modes of a system. The system is a microporous zeotype 
 material with an extra-framework species (Si(OH)4) located in the pore. 
 The vibrational modes were obtained by first optimizing the structure 
 with increased accuracy ( forc_conv_thr=1.0d-5, etot_conv_thr=1.0d-6). 
 Thereafter I did a phonon calculation with the tr2_ph option set to 
 1.0e-14. I had expected three of the vibrational modes to correspond to 
 a translation of all atoms in the unit cell in one direction. This 
 happens indeed for the lowest mode (mode 1, frequency = 3 cm-1) which 
 corresponds to a translation in the z direction. Modes 3 (frequency = 29 
 cm-1)  and 4 (frequency = 30 cm-1) show a translation of the framework 
 atoms in the y and x direction respectively, but the Si(OH)4 species is 
 not translating with the framework atoms. Is such a behavior expected?

 When calculating the vibrational contribution to the Gibbs free energy, 
 one should not include modes corresponding to a translation of the whole 
 unit cell, but what about cases such as mode 3 and 4 where the Si(OH)4 
 species is not translating with the rest of the atoms? How are such 
 cases treated correctly?
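
 (For reference, and assuming the standard harmonic treatment at the
 Gamma point only, the vibrational free-energy term is usually summed
 over the 3N-3 genuine modes, i.e. excluding the three rigid
 translations:

     F_vib(T) = sum_{i=1}^{3N-3} [ hbar*omega_i / 2
                + k_B*T * ln(1 - exp(-hbar*omega_i / (k_B*T))) ]

 so the practical question is which modes belong in that sum.)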

 Another issue concerns the extra-framework species Si(OH)4. In modes 2 
 (frequency = 23 cm-1) and 5 (frequency = 51 cm-1) the species is being 
 translated while the rest of the atoms are relatively static. Wouldn't 
 it be better to treat these modes as translations instead of vibrations? 
 In that case, how is this done correctly?

 Thank you in advance for your help.

 Yours sincerely,

 Torstein Fjermestad
 University of Oslo,
 Norway





[Pw_forum] parallelization of phonon calculations

2012-07-25 Thread Torstein Fjermestad
 Program PHONON v.4.3.2

 On Wed, 25 Jul 2012 13:40:39 +0200, Paolo Giannozzi 
  wrote:
> Which version of the phonon code have you used? P.
> ---
> Paolo Giannozzi, Dept of Chemistry,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
>
>
>
> ___
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum



[Pw_forum] parallelization of phonon calculations

2012-07-25 Thread Torstein Fjermestad
 Dear all,

 I am planning to do a phonon calculation on a zeolite structure. The 
 unit cell contains 37 atoms, the space group is P1, and the number of 
 irreducible representations is therefore 3*37 = 111. Naturally, I would 
 like to parallelize this calculation as efficiently as possible.

 From the documentation on the Quantum ESPRESSO web page I see that the 
 irreducible representations can be grouped into "images" that can be 
 computed largely independently. The following example is given:

 mpirun -np 64 ph.x -nimage 8 -npool 2 ...

 After a slight modification, I executed the following command line:

 mpirun -np 256 -npernode 8 ph.x -nimage 16 < struct0-phonon.inp > 
 struct0-phonon.out


 The calculation terminates normally, but the result is not what I 
 expected. Almost at the beginning of the output file the following is 
 printed:

  Atomic displacements:
  There are 111 irreducible representations

  Representation 1  1 modes - To be done

  Representation 2  1 modes - To be done

  Representation 3  1 modes - To be done

  Representation 4  1 modes - To be done

  Representation 5  1 modes - To be done

  Representation 6  1 modes - To be done

  Representation 7  1 modes - Not done in this run

  Representation 8  1 modes - Not done in this run

  Representation 9  1 modes - Not done in this run

 -
 The message "Not done in this run" is printed for Representation 7 to 
 111.

 Towards the end of the output file the frequencies are printed. 
 Frequencies 1 to 6 are slightly negative, frequencies 7 to 105 are 
 virtually zero, and frequencies 106 to 111 are slightly positive.

 The first and last part of the mentioned output section are shown 
 below:


  omega( 1) =  -7.228419 [THz] =-241.114112 [cm-1]
  omega( 2) =  -7.052563 [THz] =-235.248188 [cm-1]
  omega( 3) =  -6.661820 [THz] =-222.214395 [cm-1]
  omega( 4) =  -6.390946 [THz] =-213.179028 [cm-1]
  omega( 5) =  -6.252886 [THz] =-208.573825 [cm-1]
  omega( 6) =  -5.895227 [THz] =-196.643621 [cm-1]
  omega( 7) =  -0.01 [THz] =  -0.19 [cm-1]
  omega( 8) =  -0.01 [THz] =  -0.18 [cm-1]
  omega( 9) =  -0.01 [THz] =  -0.18 [cm-1]
  omega(10) =   0.00 [THz] =  -0.17 [cm-1]
  omega(11) =   0.00 [THz] =  -0.17 [cm-1]
  omega(12) =   0.00 [THz] =  -0.15 [cm-1]


  omega(98) =   0.00 [THz] =   0.14 [cm-1]
  omega(99) =   0.00 [THz] =   0.14 [cm-1]
  omega(**) =   0.00 [THz] =   0.15 [cm-1]
  omega(**) =   0.00 [THz] =   0.16 [cm-1]
  omega(**) =   0.00 [THz] =   0.16 [cm-1]
  omega(**) =   0.00 [THz] =   0.16 [cm-1]
  omega(**) =   0.00 [THz] =   0.16 [cm-1]
  omega(**) =   0.01 [THz] =   0.17 [cm-1]
  omega(**) =  16.552716 [THz] = 552.139187 [cm-1]
  omega(**) =  16.683450 [THz] = 556.499976 [cm-1]
  omega(**) =  17.480012 [THz] = 583.070434 [cm-1]
  omega(**) =  17.635841 [THz] = 588.268333 [cm-1]
  omega(**) =  17.837822 [THz] = 595.005680 [cm-1]
  omega(**) =  18.630530 [THz] = 621.447600 [cm-1]
  
 **

 I have a feeling that I have severely misunderstood some important 
 concepts. I would therefore appreciate if someone could give an 
 explanation on how to correctly perform a parallel phonon calculation.
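
 For what it is worth, the pattern that usually applies here (an
 assumption based on the usual ph.x image workflow, where each image
 computes only its share of the representations and a separate recover
 run collects them; file names below are placeholders) is a two-step
 procedure:

```shell
# Step 1: image-parallel run; each of the 16 images computes only a
# subset of the 111 representations, so "Not done in this run" for the
# rest is expected and not an error.
mpirun -np 256 ph.x -nimage 16 -inp struct0-phonon.inp > struct0-phonon.out

# Step 2: rerun WITHOUT image parallelism, with recover = .true. added
# to the &inputph namelist, so the partial results of all images are
# assembled into the full dynamical matrix before diagonalization.
mpirun -np 16 ph.x -inp struct0-phonon-recover.inp > struct0-phonon-collect.out
```

 Without the collecting run, the frequencies printed at the end come
 from an incomplete dynamical matrix, which would explain the near-zero
 values for the representations marked "Not done in this run".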

 Thank you very much in advance for your help.

 Yours sincerely,

 Torstein Fjermestad
 University of Oslo
 Norway

 



 



 


[Pw_forum] problem with neb calculations / openmpi

2012-03-19 Thread Torstein Fjermestad
 Dear Layla,

 The file is attached.
 Thank you very much for your help.

 Yours sincerely,

 Torstein Fjermestad




 On Mon, 19 Mar 2012 14:53:29 +0100, Layla Martin-Samos 
  wrote:
> Dear Torstein could you send the file input.inp, just to try to
> reproduce the error in an other machine?
>
> bests
>
> Layla
>

-- next part --
A non-text attachment was scrubbed...
Name: neb_12.inp
Type: application/octet-stream
Size: 9937 bytes
Desc: not available
Url : 
http://www.democritos.it/pipermail/pw_forum/attachments/20120319/aa33f157/attachment-0001.obj
 


[Pw_forum] problem with neb calculations / openmpi

2012-03-19 Thread Torstein Fjermestad
 Dear Prof. Giannozzi,

 Thanks for the suggestion.
 The two tests I referred to were both run with image parallelization 
 (16 processors and 8 images).
 The tests were run with the same input file and submit script. The 
 command line was as follows:

 mpirun -np 16 -npernode 8 neb.x -nimage 8 -inp input.inp > output.out

 In this case the job is submitted and is labelled as "running". It 
 stays like this until the end of the requested time, but it produces no 
 output. At the end of the file slurm-.out the following message 
 is printed:



 slurmd[compute-14-6]: *** JOB 9146164 CANCELLED AT 2012-03-15T23:20:09 
 DUE TO TIME LIMIT ***
 mpirun: killing job...

 Job 9146164 ("neb_11") completed on compute-14-[6-7] at Thu Mar 15 
 23:20:09 CET 2012
 --
 mpirun noticed that process rank 0 with PID 523 on node 
 compute-14-6.local exited on signal 0 (Unknown signal 0).
 --
 [compute-14-6.local:00516] [[31454,0],0]-[[31454,0],1] 
 mca_oob_tcp_msg_recv: readv failed: Connection reset by peer (104)
 mpirun: clean termination accomplished

 


 When removing the image parallelization by either setting -nimage 1 or 
 removing the option altogether (but still running on 16 processors), the 
 job only runs for a few seconds. At the end of the file 
 slurm-.out  the following message is printed:



  from test_input_xml: Empty input file .. stopping
 --
 mpirun has exited due to process rank 11 with PID 32678 on
 node compute-14-13 exiting without calling "finalize". This may
 have caused other processes in the application to be
 terminated by signals sent by mpirun (as reported here).
 --
 Job 9163874 ("neb_13") completed on compute-14-[12-13] at Sat Mar 17 
 19:55:23 CET 2012


 I found in particular the line "from test_input_xml: Empty input file 
 .. stopping" interesting. The program stops because it thinks a file is 
 empty.

 Although I did not get much closer to having a running program, I 
 thought that this change in behavior was interesting. Maybe it can give 
 you (or someone else) a hint on what is going on.

 Of course, this erroneous behavior may have other causes, such as a 
 machine-related issue, the openmpi environment, the installation 
 procedure, etc. However, before contacting the sysadmin, I would like 
 to rule out (to the extent possible) any issues related to quantum 
 espresso itself.

 Thanks in advance.

 Yours sincerely,
 Torstein Fjermestad
 University of Oslo,
 Norway

 
 



 On Thu, 15 Mar 2012 22:26:22 +0100, Paolo Giannozzi 
  wrote:
> On Mar 15, 2012, at 20:48 , Torstein Fjermestad wrote:
>
>>  pw.x now works without problem, but neb.x only works when one node
>>  (with 8 processors) is requested. I have run two tests requesting 
>> two
>>  nodes (16 processors) and in both cases I see the same erroneous
>>  behavior:
>
> with "image" parallelization in both cases? can you run neb.x with 1  
> image
> and 16 processors?
>
>
> P.
> ---
> Paolo Giannozzi, Dept of Chemistry,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222



[Pw_forum] problem with neb calculations / openmpi

2012-03-15 Thread Torstein Fjermestad
 Dear Dr. Kohlmeyer,

 Thank you for your suggestion. Since yesterday I have made some 
 progress, and I now think I see a more systematic behavior. What I did 
 first was to recompile pw.x and neb.x with a newer version of openmpi 
 (openmpi/1.4.3.gnu). Apparently this made the openmpi error message 
 disappear.
 pw.x now works without problem, but neb.x only works when one node 
 (with 8 processors) is requested. I have run two tests requesting two 
 nodes (16 processors) and in both cases I see the same erroneous 
 behavior:

 The output file is only 13 lines long and the last three lines are as 
 follows:

  Parallel version (MPI), running on 2 processors
  path-images division:  nimage=8
  R & G space division:  proc/pool =2

 In the working directory of the calculations the files of type out.5_0 
 contain several million repetitions of the error message

   Message from routine  read_line :
   read error

 I think this behavior is general because it is rather unlikely that 
 both calculations were accidentally submitted to a defective node. To 
 me it seems like there is some kind of failure in the communication 
 between the nodes.

 I should certainly contact the sysadmin of the machine, but in order to 
 make their work easier, I would like to make sure whether the erroneous 
 behavior is caused by the machine or by the compilation/installation of 
 quantum espresso.

 If anyone has had similar experiences before, it would be nice if you 
 could share ideas on possible causes.

 Thanks in advance.

 Yours sincerely,

 Torstein Fjermestad
 University of Oslo,
 Norway

 
 On Wed, 14 Mar 2012 16:36:15 -0400, Axel Kohlmeyer  
 wrote:
> On Wed, Mar 14, 2012 at 4:18 PM, Torstein Fjermestad
>  wrote:
>> Dear all,
>>
>> I recently installed quantum espresso v4.3.2 in my home directory at an
>> external supercomputer cluster.
>> The way I did this was to execute the following commands:
>>
>> ./configure
>> make all
>>
>> after first having loaded the MPI environment and the Fortran and C
>> compilers with the following commands:
>> module load openmpi
>> module load g95/093
>> module load gcc
>>
>> ./configure was successful and make seemed to finish normally (at least
>> I did not get any error message).
>>
>> So far I have only been using the pw.x and neb.x executables.
>> In a file named "slurm-jobID.out" that is generated by the queuing
>> system, I get the following message when running both pw.x and neb.x:
>>
>> mca: base: component_find: unable to open
>> /site/VERSIONS/openmpi-1.3.3.gnu/lib/openmpi/mca_mtl_psm: perhaps a
>> missing symbol, or compiled for a different version of Open MPI?
>> (ignored)
>>
>> This message seems rather clear, but I am not sure how relevant it is
>> because pw.x runs without problem on 64 processors (I have compared the
>> output with that generated on another machine). neb.x on the other hand
>> works when running on a single processor, but fails when running in
>> parallel (yes, I have used the -inp option).
>>
>> The output of the neb calculation is only 13 lines and the last three
>> lines are
>>
>>      Parallel version (MPI), running on    16 processors
>>      path-images division:  nimage    =   10
>>      R & G space division:  proc/pool =   16
>>
>>
>> In the output files out.n_0, where n = {1,9}, the error message
>>
>>      Message from routine  read_line :
>>      read error
>>
>> is repeated several thousand times.
>>
>>
>> I have a feeling that there is something I have got wrong with the
>> parallel environment. If I (accidentally) compiled QE for a different
>> openmpi version than 1.3.3.gnu, it would be interesting to know which
>> one. Does anyone have an idea of how I can check this?
>>
>> In case the cause of the problem is a different one, it would be nice
>> if someone had any suggestions on how to solve it.
>
> this sounds a lot like one of the nodes that you are using has a
> network problem and you are trying to read from an NFS-exported
> directory, but get only i/o errors. The OpenMPI-based error message
> supports this. At least, I have only seen this kind of error when one
> of the nodes in a parallel job had to be rebooted hard because of an
> obscure and rarely triggered bug in the ethernet driver.
>
> You should see if this happens always, or only when one specific node
> is assigned to your job.
> I would also talk to the sysadmin of the machine.
>
> HTH,
> axel.
>
>
>>
>> Thank you very much in advance.
>>
>> Yours sincerely,
>>
>> Torstein Fjermestad
>> University of Oslo,
>> Norway
>>
>>
>>
>>
>>
>>
>>



[Pw_forum] problem with neb calculations / openmpi

2012-03-14 Thread Torstein Fjermestad
 Dear all,

 I recently installed quantum espresso v4.3.2 in my home directory at an 
 external supercomputer cluster.
 The way I did this was to execute the following commands:

 ./configure
 make all

 after first having loaded the MPI environment and the Fortran and C 
 compilers with the following commands:
 module load openmpi
 module load g95/093
 module load gcc

 ./configure was successful and make seemed to finish normally (at least 
 I did not get any error message).

 So far I have only been using the pw.x and neb.x executables.
 In a file named "slurm-jobID.out" that is generated by the queuing 
 system, I get the following message when running both pw.x and neb.x:

 mca: base: component_find: unable to open 
 /site/VERSIONS/openmpi-1.3.3.gnu/lib/openmpi/mca_mtl_psm: perhaps a 
 missing symbol, or compiled for a different version of Open MPI? 
 (ignored)

 This message seems rather clear, but I am not sure how relevant it is 
 because pw.x runs without problem on 64 processors (I have compared the 
 output with that generated on another machine). neb.x on the other hand 
 works when running on a single processor, but fails when running in 
 parallel (yes, I have used the -inp option).

 The output of the neb calculation is only 13 lines and the last three 
 lines are

  Parallel version (MPI), running on16 processors
  path-images division:  nimage=   10
  R & G space division:  proc/pool =   16


 In the output files out.n_0, where n = {1,9}, the error message

  Message from routine  read_line :
  read error

 is repeated several thousand times.


 I have a feeling that there is something I have got wrong with the 
 parallel environment. If I (accidentally) compiled QE for a different 
 openmpi version than 1.3.3.gnu, it would be interesting to know which 
 one. Does anyone have an idea of how I can check this?
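
 One quick way to check (the paths below are illustrative; adjust them
 to wherever your executables were built) is to inspect which MPI shared
 libraries the binary actually resolves at run time, and compare that
 with what the loaded openmpi module provides:

```shell
# Show the MPI libraries the executable links against at run time;
# the printed path reveals which Open MPI installation is used.
ldd ./bin/pw.x | grep -i mpi

# Compare with the library directory exported by the loaded module.
module show openmpi 2>&1 | grep -i -e lib -e version
```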

 In case the cause of the problem is a different one, it would be nice 
 if someone had any suggestions on how to solve it.

 Thank you very much in advance.

 Yours sincerely,

 Torstein Fjermestad
 University of Oslo,
 Norway

 


 
 



[Pw_forum] NEB in QE 4.3: NaN values of some of the initially generated images

2012-01-27 Thread Torstein Fjermestad
 Dear Prof. Giannozzi,

 Thanks for your suggestion.
 Using version 4.3.2 of Quantum Espresso solved the problem I had 
 generating the initial NEB images.
 With this e-mail I do not only want to thank you, but also make people 
 aware of the need to update to v 4.3.2 in case they encounter a similar 
 problem in the future. This is why I send this e-mail to pw_forum.

 Yours sincerely

 Torstein Fjermestad
 University of Oslo
 Norway
 
 

 On Tue, 17 Jan 2012 19:17:52 +0100, Paolo Giannozzi 
  wrote:
> On Jan 17, 2012, at 18:51 , Torstein Fjermestad wrote:
>
>>   Does anyone have further suggestions on how to solve this problem?
>
> first of all, please try v.4.3.2; it contains a few fixes for NEB wrt
> v.4.3
> === Doc/release-notes
> Fixed in 4.3.2 version
> [...]
> * NEB: possible problem in parallel execution (if command-line
> arguments
>   are not available to all processors) avoided by broadcasting
> arguments
> [...]
> Fixed in 4.3.1 version
> [...]
> * NEB + nonlocal exchange (DF-vdW) or hybrid functionals wasn't
> working
> * NEB: incorrect parsing of intermediate images fixed
> ===
>
> P.
> ---
> Paolo Giannozzi, Dept of Chemistry,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
>
>
>
> ___
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum



[Pw_forum] NEB in QE 4.3: NaN values of some of the initially generated images

2012-01-17 Thread Torstein Fjermestad
 Dear Layla,

 thanks for your suggestion.

 Unfortunately setting nstep_path=1 did not solve the problem. Image 3 
 and 4 of the .path file contained the NaN values just as before.

 In the output file of the calculation it says nearly at the top:

  initial path length   = NaN bohr
  initial inter-image distance  = NaN bohr

 To me this result seems fairly obvious. Because two of the initial 
 images contain NaN values, there is no way in which the program can 
 calculate the initial path length.
 

 Some lines further down it says:

-- iteration   1 
 --

  tcpu =  3.0self-consistency for image   1
  tcpu =129.3self-consistency for image   2

  
 %%
  from coset : error # 1
  nsym == 0
  
 %%

  stopping ...


 The way I interpret this information is that the program manages to 
 finish the SCF cycle for image 1 and 2, but when it comes to image 3 it 
 fails because that structure consists only of NaN values.

 The origin of the problem seems to lay in the generation of the initial 
 images.
 Does anyone have further suggestions on how to solve this problem?

 Thank you very much in advance.

 Yours sincerely

 Torstein Fjermestad

 


 On Tue, 17 Jan 2012 15:18:25 +0100, Layla Martin-Samos 
  wrote:
> Dear Torstein, nstep_path=0 produces a "strange behavior" as NEB
> starts counting at 1. So if you just set nstep_path=1 it should work.
>
> bests
>
> Layla
>
> 2012/1/16 Torstein Fjermestad
>  Dear all,
>
>  I have recently made several attempts to submit a NEB calculation
> using Quantum Espresso version 4.3.
>  Unfortunately, every attempt fails with the program printing NaN
> values instead of the Cartesian coordinates for some of the initial
> images. For instance, in the .path file corresponding to the attached
> input file, images 3 and 4 (num_of_images=5) consist entirely of NaN
> values instead of real Cartesian coordinates. Of course, if some of the
> initial images consist only of NaN values, the calculation has no
> chance of continuing.
>
>  There have been significant changes in the way to submit a NEB
> calculation between version 4.2.1 and version 4.3, and to test 
> whether
> the same behavior would occur in version 4.2.1, I submitted a NEB
> calculation with QE version 4.2.1 with exactly the same input
> structures. In that case the program had no problem in generating the
> initial images. Because of this I think we can exclude the 
> possibility
> of the error being caused by the input coordinates themselves.
>
>  In the script I used to submit the calculation, the line to run the
> neb.x executable is the following:
>
>  mpirun -np 16 -npernode 8 neb.x -inp neb_11.inp > neb_11.out
>
>  Have any of you come across a similar problem before?
>  Does anyone have suggestions on how to prevent the NaN values from
> appearing?
>
>  Thanks in advance.
>
>  Yours sincerely
>
>  Torstein Fjermestad
>  University of Oslo,
>  Norway.
>



[Pw_forum] NEB in QE 4.3: NaN values of some of the initially generated images

2012-01-16 Thread Torstein Fjermestad
 Dear all,

 I have recently made several attempts to submit a NEB calculation using 
 Quantum Espresso version 4.3.
 Unfortunately, every attempt fails with the program printing NaN values 
 instead of the Cartesian coordinates for some of the initial images. For 
 instance, in the .path file corresponding to the attached input file, 
 images 3 and 4 (num_of_images=5) consist entirely of NaN values instead 
 of real Cartesian coordinates. Of course, if some of the initial images 
 consist only of NaN values, the calculation has no chance of continuing.

 There have been significant changes in the way to submit a NEB 
 calculation between version 4.2.1 and version 4.3, and to test whether 
 the same behavior would occur in version 4.2.1, I submitted a NEB 
 calculation with QE version 4.2.1 with exactly the same input 
 structures. In that case the program had no problem in generating the 
 initial images. Because of this I think we can exclude the possibility 
 of the error being caused by the input coordinates themselves.
 
 In the script I used to submit the calculation, the line to run the 
 neb.x executable is the following:

 mpirun -np 16 -npernode 8 neb.x -inp neb_11.inp > neb_11.out


 Have any of you come across a similar problem before?
 Does anyone have suggestions on how to prevent the NaN values from 
 appearing?

 Thanks in advance.

 Yours sincerely

 Torstein Fjermestad
 University of Oslo,
 Norway.

 




-- next part --
A non-text attachment was scrubbed...
Name: neb_11.inp
Type: application/octet-stream
Size: 13055 bytes
Desc: not available
Url : 
http://www.democritos.it/pipermail/pw_forum/attachments/20120116/fd5abaf6/attachment.obj