Your mail is too big. Here is an excerpt and some replies:

How did you modify optimize.job? What is your run_lapw line?

The error is probably here. You said:

I calculated the stress tensor (-pres 0.1) ...

-pres 0.1  is not a valid option for run_lapw. It should be -str 0.1

Did you read the comments in the UG or the update section on the web about the stress tensor? It works "similarly" to the forces (-fc 1): only when the partial pressure is converged are the additional terms in lapw2 calculated, giving the total tensor. The partial tensor is "meaningless", i.e. don't worry about its values.
Remember: the additional term in lapw2 will take quite some CPU time.
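
For example, a minimal sketch of a corrected run_lapw line (only -str is from this thread; the -ec value is just an illustrative convergence criterion, adapt it to your setup):

run_lapw -str 0.1 -ec 0.0001

Once the scf cycle is converged, lapw2 should add the extra terms and the :STRESS_GPa / :PRESSURE lines should no longer be labeled "partial".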

If you see ***INFO in the :ENE line, you should also grep for :INFO. Most likely it is not crucial.
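
For instance (assuming the labels end up in the usual case.scf and case.output* files):

grep :INFO case.scf
grep :INFO case.output*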

PS: I saw that you have just 5 nonequivalent atoms in the cell. I usually run such cells on a simple PC, and even there it does NOT take 5 minutes. What is your :RKM? For a small matrix size, using 64 cores in MPI (I hope you compiled with ELPA) may be slower than running sequentially or with fewer MPI cores. Remember: more cores does not necessarily mean that it runs faster - in fact, it can also run MUCH SLOWER!
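
To check, something along these lines (a sketch; case.scf is the usual scf-file name, the .machines line is only an illustration):

grep :RKM case.scf    # matrix size and actual RKmax per iteration

and, for comparison, time one iteration with plain k-point parallelization only (e.g. two lines "1:irene4046" without the :64 in .machines) to see whether the 64-core mpi run is really faster.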


-----------------------------------------------
Thank you Peter and Mark for your responses.

I have checked the structure; it looks OK, and the 'a' parameter (cubic) in the struct file agrees with the experimental one (19.5 Bohr versus 19.6), but when I calculated the stress tensor (-pres 0.1) at the end of the first SCF I got -26474 GPa. Such a large value is strange…

The initial SCF went fine, but I have not tried for other volumes.

As you suggested, I have run the sequence again:
x dstart
optimize.job

but I get the same result.

---
To be complete, I split the 128-core node (1 node = 2 processors with 64 cores each) into two for k-point parallelization; this gives 2 files: case.klist_1 and case.klist_2. Here is the .machines content:
# OMP parallelization
omp_global:1
#omp_lapw1:1
#omp_lapw2:1
#omp_lapwso:1
#omp_dstart:1
#omp_sumpara:1
#omp_nlvdw:1
# k-point parallelization for lapw1/2 hf lapwso qtl irrep  nmr  optic
1:irene4046:64
1:irene4046:64
# MPI parallelization for dstart lapw0 nlvdw
dstart: irene4046:6
lapw0: irene4046:6
nlvdw: irene4046:6
granularity:1
extrafine:1
-----

Result for :NEC01:
:NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93461
:NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93464
:NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93542
:NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93483

For :ENE:
:ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100251.28453414
:ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100251.20603572
:ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100249.25329142
:ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100252.25418212

Nothing for :WAR



For the output of mixing, case.outputm:
:NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93483
:OTO   : INTERSTITIAL CHARGE  =    71.485237

:NEC02: NUCLEAR AND ELECTRONIC CHARGE    760.00000   761.00379958
:MIX  :   PRATT  REGULARIZATION:  2.00E-04 GREED: 0.00100
:CTO   : INTERSTITIAL CHARGE  =    70.478002

:NEC03: NUCLEAR AND ELECTRONIC CHARGE    760.00000   760.00000000

:ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100252.25418212

:STRESS_GPa001:    246361.85088        0.00000        0.00000   partial
:STRESS_GPa002:         0.00000   246361.85088        0.00000   partial
:STRESS_GPa003:         0.00000        0.00000   246361.85088   partial

:PRESSURE:        246361.85088 GPa     partial

:FOR001: 1.ATOM 0.000 0.000 0.000 0.000 partial forces
:FOR002: 2.ATOM 3.326 0.000 0.000 -3.326 partial forces

