[Wien] lapw2 error

2014-10-29 Thread Wanxiang Feng
Dear Prof. Blaha,

I used WIEN2k_14.2 to calculate the electronic structure of LaSbTe3 in the spin-polarized case with SOC. The standard flow is:

init_lapw -b -sp -numk 2500
runsp_lapw -p
initso_lapw
runsp_lapw -p -so

"runsp_lapw -p" finishes normally, but "runsp_lapw -p -so" always gives the error "L2main - QTL-B Error". I have searched the mailing list and tried many times to adjust the linearization energies of every atom, but I never succeeded. The structure file is attached; could you help me find the cause?

Thanks in advance.

W. Feng

Attachment: LaSbTe3.struct (binary data)
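
(A minimal sketch of the usual first checks for a QTL-B stop, assuming
the standard WIEN2k scripts; the -in1new switch, which recreates the
linearization energies in case.in1 from the last SCF cycle, may not be
available in every version:)

 grep -i 'QTL-B' *.output2*      # which atom and l-channel trigger the warning
 grep ':WARN' *.scf              # warnings collected from the last cycles
 runsp_lapw -p -so -in1new 2     # rerun, recreating case.in1 after 2 iterations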


[Wien] RKmax related errors in parallel OPTIC program

2011-12-22 Thread wanxiang feng
Dear Prof. Blaha

There are some RKmax-related errors in the parallel OPTIC
program (WIEN2k_11.1). My system is monolayer MoS2 in a slab model; I
just want to calculate the matrix elements of the momentum operator
with the OPTIC program. In a serial calculation, RKmax = 7 or 9 in
case.in1c both work fine. In a parallel calculation, RKmax = 7 is
OK, but RKmax = 9 gives an error.

After obtaining the converged ground state (16x16x1 k-mesh), my calculation flow is:

1) serial calculation

x lapw0
x lapw1 -c -up
x lapw1 -c -dn
x lapwso -c -up
x optic -c -so -up

RKmax = 7 and 9 are both OK!

2) parallel calculation (k-point parallel)

x lapw0
x lapw1 -c -up -p
x lapw1 -c -dn -p
x lapwso -c -up -p
x optic -c -so -up -p

RKmax = 7 is OK, but RKmax = 9 gives an error!

The error output:
---
running OPTIC in parallel mode
[1] 24639
[2] 24833
[3] 25027
[4] 25221
[5] 25415
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
opticc             00423937          planew_            164         planew_tmp.f
opticc             0043247F          mom_mat_           588         sph-UP_tmp.f
opticc             0041D660          MAIN__             447         opmain.f
opticc             004035EC          Unknown            Unknown     Unknown
libc.so.6          00366FC1D994      Unknown            Unknown     Unknown
opticc             004034F9          Unknown            Unknown     Unknown
---
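
(For reference, k-point-parallel runs like those above are driven by a
.machines file of the usual form; the hostnames below are placeholders:)

 # one line per k-point job:  weight:host:number_of_cores
 1:node1:1
 1:node1:1
 1:node2:1
 1:node2:1
 1:node3:1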

I have attached the structure file and all input files; could you kindly
help me test it and find the exact reason?

Thanks in advance!

Feng
Attachments (scrubbed by the list archive): MoS2-monolayer.struct, MoS2-monolayer.in0, MoS2-monolayer.in1c, MoS2-monolayer.inso, MoS2-monolayer.inop


[Wien] mBJ + U ?

2010-07-21 Thread wanxiang feng
Dear Profs. Tran and Blaha,

Regarding both the theoretical background and the practical program code:
is it possible to do SCF and band-structure calculations using mBJ+U
(just like LDA+U) for strongly correlated systems?
If it is feasible, how should the U value be determined? Is it the same
as the U in the LDA+U scheme? Could you point me to some literature on this?

Thank you,

Feng


[Wien] a parallel error of lapw0 with MBJLDA potential (updated)

2010-06-20 Thread wanxiang feng
Unfortunately, lapw0 cannot handle fcc Th (thorium); the endless
loop is still in brj.f.
I tried simply adjusting q, but it had no effect.

Thanks,

feng

2010/6/15 wanxiang feng fengwanxiang at gmail.com:
 All results became reasonable,

 Thanks for your help!

 Feng


 2010/6/14 Peter Blaha pblaha at theochem.tuwien.ac.at:
 1. I do not fully understand what you mean by "It is probably
 completely uncritical for the gap".
 After the temporary fix in brj.f, can the code correctly handle
 systems with very heavy elements, and are their band gaps reasonable?

 Yes.

 2. We suspect that there are still some bugs in lapw0_mpi, because the
 band gap differs:

 Ge:
 0.85 eV  (lapw0)
 0.71 eV  (lapw0_mpi)

 There was still a bug in the interstitial region in case you have more
 processors than atoms.
 It has been fixed and the new version is on the web.

 PS: This new version also includes improved W2kutil and W2kinit subroutines,
 which also compile under Sun Solaris.
 --
                                      P. Blaha
 --
 Peter BLAHA, Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
 Phone: +43-1-58801-15671             FAX: +43-1-58801-15698
 Email: blaha at theochem.tuwien.ac.at    WWW: http://info.tuwien.ac.at/theochem/
 --




[Wien] a parallel error of lapw0 with MBJLDA potential (updated)

2010-06-13 Thread wanxiang feng
Dear prof. Blaha

1. I do not fully understand what you mean by "It is probably
completely uncritical for the gap".
After the temporary fix in brj.f, can the code correctly handle
systems with very heavy elements, and are their band gaps reasonable?


2. We suspect that there are still some bugs in lapw0_mpi, because the
band gap differs:

Ge:
0.85 eV  (lapw0)
0.71 eV  (lapw0_mpi)

GaAs:
1.61 eV  (lapw0)
1.37 eV  (lapw0_mpi)

Furthermore, for the first case we provided (test.struct), lapw0_mpi
may lead to unphysical results.
There is a fourfold-degenerate state at about 0.1 eV above the Fermi
energy in the GGA and MBJLDA (using lapw0) results.
This fourfold-degenerate state is protected by symmetry, but it is
absent from the MBJLDA result when lapw0_mpi is used.

We look forward to a more reliable version of lapw0 and lapw0_mpi.


Sincerely yours,

Wanxiang Feng



2010/6/11 Peter Blaha pblaha at theochem.tuwien.ac.at:
 We will have to make a more detailed analysis.
 Apparently the problem is only near the nucleus for very heavy elements,
 where tau or g2rho take values of + or - 10**15 because of diverging
 subterms. It is probably completely uncritical for the gap, ...

 A temporary fix can be made in brj.f: just before the "do while" loop,
 add the following lines:

 tauw = 0.125d0*grho*grho*2.d0/rho
 if(tau.lt.tauw)  tau=tauw
        D = TAU - 0.25D0*GRHO**2D0/RHO
        Q = (1D0/6D0)*(G2RHO - 2D0*0.8D0*D)
 if(tau.eq.tauw .and. q.lt.-1.d9)  q=-1.d9   ! eventually experiment with the value of q

   10   DO WHILE (DABS(F) .GE. TOL)

 We will check if q has some physical bound which could be used as a
 better estimate.
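
(A sketch of applying such a patch, assuming a standard WIEN2k source
tree; the exact build steps may differ between versions:)

 cd $WIENROOT/SRC_lapw0
 # insert the lines quoted above into brj.f, just before the DO WHILE loop
 make                   # rebuild the sequential lapw0
 cp lapw0 $WIENROOT
 # rebuild and copy lapw0_mpi the same way if the mpi version is used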



[Wien] New exchange-correlation potential

2010-06-11 Thread wanxiang feng
I'm not sure about your first step: for the first SCF run, you set
NR2V and indxc=5 in GaAs.in0;
shouldn't it be indxc=13 (GGA-PBE potential) according to the user guide?


 Hello,
 I have got a band gap of 0.8 eV, with lattice constant 5.6533 Angstrom;
 the space group No. is 216, with two atoms at (0, 0, 0) and (0.25, 0.25, 0.25).
 I have checked that:
 1. In GaAs.inm_vresp, YES has been replaced by NO;
 2. I have done the calculation in the following steps:
 For the first SCF run, I set NR2V and indxc=5 in GaAs.in0;
 change NR2V to R2V and run one more SCF cycle;
 save_lapw lda;
 set indxc=28 in GaAs.in0;
 cp GaAs.in0 GaAs.in0_grr and set indxc=50 in GaAs.in0_grr;
 run_lapw -p -cc 0.1;
 plot the band structure.
 Is anything wrong with my procedure?
 yonghong
 On 2010-06-10 05:06, F. Tran wrote:
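
(A condensed sketch of the procedure under discussion, assuming the
standard run_lapw/save_lapw scripts; the edits of GaAs.in0 are done by
hand and only indicated as comments:)

 run_lapw -cc 0.0001    # first SCF run, NR2V and indxc=5 (or 13, see above) in GaAs.in0
 # change NR2V to R2V in GaAs.in0; one more cycle then writes GaAs.r2v
 run_lapw -NI -i 1
 save_lapw lda
 # set indxc=28 (mBJ) in GaAs.in0; copy it to GaAs.in0_grr and set indxc=50 there
 cp GaAs.in0 GaAs.in0_grr
 run_lapw -p -cc 0.1
 # then compute and plot the band structure as usual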



[Wien] a parallel error of lapw0 with MBJLDA potential (updated)

2010-06-11 Thread wanxiang feng
Thanks for your timely reply!

I know that lapw0_mpi will not speed up a small system
like GaAs; it was just a test case before we calculate some larger
systems.

Now the code handles the lapw0 parallel run of GaAs correctly, but
another problem arises when we calculate larger systems (3 or 8
inequivalent atoms in the primitive cell)!

The calculation cannot proceed normally at the second call of lapw0,
whether or not lapw0 runs in parallel.

The job does not stop, and lapw0 (or lapw0_mpi) runs without any
error message, but it never finishes, even after a very long time.

 case.dayfile
===

start   (Fri Jun 11 00:08:00 CST 2010) with lapw0 (1/99 to go)

cycle 1 (Fri Jun 11 00:08:00 CST 2010)  (1/99 to go)

>   lapw0 -grr -p   (00:08:00) starting parallel lapw0 at Fri Jun 11 00:08:00 CST 2010
.machine0 : 16 processors
0.824u 0.444s 0:10.82 11.6%  0+0k 0+0io 0pf+0w
>   lapw0 -p        (00:08:11) starting parallel lapw0 at Fri Jun 11 00:08:11 CST 2010
.machine0 : 16 processors

=

It seems that the code cannot handle systems containing more than
two inequivalent atoms. We suspect there are still some bugs in
lapw0 concerning the MBJLDA potential.

The attachment can be used as a test example.


Thanks,

Feng.



2010/6/10 Peter Blaha pblaha at theochem.tuwien.ac.at:
 Thanks for the report. I could verify the problem with the mpi-parallel
 version for mBJ, and a corrected version is on the web for download.

 HOWEVER: Please be aware that lapw0_mpi parallelizes (mainly) over the
 atoms. Thus for GaAs I do not expect any speedup from using more than 2
 processors.

 Furthermore: Do NOT blindly use parallel calculations. For these small
 systems a sequential calculation (maybe with OMP_NUM_THREADS set to 2) might
 be FASTER than an 8-fold or higher parallel calculation (parallel overhead,
 disk I/O, summary steps, slower memory access, ...).
 Always compare the real timings of lapw0/1/2 in the dayfiles of a
 sequential and a parallel calculation.
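
(For example, timings can be compared like this; csh syntax, since the
WIEN2k scripts use csh:)

 setenv OMP_NUM_THREADS 2         # try a 2-thread sequential run first
 grep -E 'lapw[012]' *.dayfile    # per-step cpu and wall-clock timings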

-- next part --
bleblebles-o calc. M||  0.00  0.00  1.00   
F3  216
 RELA  
 12.425894 12.425894 12.425894 90.00 90.00 90.00   
ATOM  -1: X=0. Y=0. Z=0.
  MULT= 1  ISPLIT=-2
Bi NPT=  781  R0=.05000 RMT=   2.5   Z:  83.0  
LOCAL ROT MATRIX:1.000 0.000 0.000
 0.000 1.000 0.000
 0.000 0.000 1.000
ATOM  -2: X=0.2500 Y=0.2500 Z=0.2500
  MULT= 1  ISPLIT=-2
Pt NPT=  781  R0=.05000 RMT=   2.5   Z:  78.0  
LOCAL ROT MATRIX:1.000 0.000 0.000
 0.000 1.000 0.000
 0.000 0.000 1.000
ATOM  -3: X=0.5000 Y=0. Z=0.
  MULT= 1  ISPLIT=-2
Lu NPT=  781  R0=.1 RMT=   2.5   Z:  71.0  
LOCAL ROT MATRIX:1.000 0.000 0.000
 0.000 1.000 0.000
 0.000 0.000 1.000
   8  NUMBER OF SYMMETRY OPERATIONS
 0 1 0 0.000
-1 0 0 0.000
 0 0-1 0.000
   1   A   3 so. oper.  type  orig. index
 0-1 0 0.000
 1 0 0 0.000
 0 0-1 0.000
   2   A   7
-1 0 0 0.000
 0-1 0 0.000
 0 0 1 0.000
   3   A  16
 1 0 0 0.000
 0 1 0 0.000
 0 0 1 0.000
   4   A  24
 1 0 0 0.000
 0-1 0 0.000
 0 0-1 0.000
   5   B   1
-1 0 0 0.000
 0 1 0 0.000
 0 0-1 0.000
   6   B   9
 0 1 0 0.000
 1 0 0 0.000
 0 0 1 0.000
   7   B  18
 0-1 0 0.000
-1 0 0 0.000
 0 0 1 0.000
   8   B  22
Attachment (scrubbed by the list archive): ouput0.rar


[Wien] a parallel error of lapw0 with MBJLDA potential (updated)

2010-06-11 Thread wanxiang feng
It seems that there is an endless loop in brj.f:

===
   10   DO WHILE (DABS(F) .GE. TOL)
        ...
        ENDDO

        IF (X .LT. 0D0) THEN
        ...
        ENDIF
===

In our own tests, another case with the same structure
(test2.struct) does not run into this situation.
The problem is rather delicate, and we ask for your help.
Note: we perform spin-polarized calculations plus spin-orbit
coupling for these cases:  runsp_lapw -so -p .


Thanks

feng



-- next part --
bleblebles-o calc. M||  0.00  0.00  1.00   
F3  216
 RELA  
 11.243870 11.243870 11.243870 90.00 90.00 90.00   
ATOM  -1: X=0.5000 Y=0. Z=0.
  MULT= 1  ISPLIT=-2
Ti NPT=  781  R0=.5 RMT=   2.42000   Z:  22.0  
LOCAL ROT MATRIX:1.000 0.000 0.000
 0.000 1.000 0.000
 0.000 0.000 1.000
ATOM  -2: X=0.2500 Y=0.2500 Z=0.2500
  MULT= 1  ISPLIT=-2
Ni NPT=  781  R0=.5 RMT=   2.42000   Z:  28.0  
LOCAL ROT MATRIX:1.000 0.000 0.000
 0.000 1.000 0.000
 0.000 0.000 1.000
ATOM  -3: X=0. Y=0. Z=0.
  MULT= 1  ISPLIT=-2
Sn NPT=  781  R0=.1 RMT=   2.27000   Z:  50.0  
LOCAL ROT MATRIX:1.000 0.000 0.000
 0.000 1.000 0.000
 0.000 0.000 1.000
   8  NUMBER OF SYMMETRY OPERATIONS
 0 1 0 0.000
-1 0 0 0.000
 0 0-1 0.000
   1   A   3 so. oper.  type  orig. index
 0-1 0 0.000
 1 0 0 0.000
 0 0-1 0.000
   2   A   7
-1 0 0 0.000
 0-1 0 0.000
 0 0 1 0.000
   3   A  16
 1 0 0 0.000
 0 1 0 0.000
 0 0 1 0.000
   4   A  24
 1 0 0 0.000
 0-1 0 0.000
 0 0-1 0.000
   5   B   1
-1 0 0 0.000
 0 1 0 0.000
 0 0-1 0.000
   6   B   9
 0 1 0 0.000
 1 0 0 0.000
 0 0 1 0.000
   7   B  18
 0-1 0 0.000
-1 0 0 0.000
 0 0 1 0.000
   8   B  22


[Wien] a parallel error of lapw0 with MBJLDA potential

2010-06-10 Thread wanxiang feng
Honorable Professor Blaha,

I calculated GaAs and Ge with the MBJLDA potential following the steps in
section 4.5.8 of the user guide.
There is no problem with Ge, and the calculated energy gap is 0.85 eV,
but lapw0 crashed (in the last step, "run another scf cycle", of
sec. 4.5.8) when I calculated GaAs. The error files are:

=== GaAs.dayfile
===

start   (Thu Jun 10 00:03:22 CST 2010) with lapw0 (40/99 to go)

cycle 1 (Thu Jun 10 00:03:22 CST 2010)  (40/99 to go)

>   lapw0 -grr -p   (00:03:22) starting parallel lapw0 at Thu Jun 10 00:03:23 CST 2010
.machine0 : 8 processors
1.522u 0.702s 0:07.17 30.9%  0+0k 0+0io 0pf+0w
>   lapw0 -p        (00:03:30) starting parallel lapw0 at Thu Jun 10 00:03:30 CST 2010
.machine0 : 8 processors
rm_l_1_8923: (1.867188) net_send: could not write to fd=6, errno = 9
rm_l_1_8923:  p4_error: net_send write: -1
p4_6838:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_4_6885: (1.066406) net_send: could not write to fd=5, errno = 32
p5_6889:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_5_6936: (0.804688) net_send: could not write to fd=5, errno = 32
p7_6991:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_7_7038: (0.281250) net_send: could not write to fd=5, errno = 32
p6_6940:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_6_6987: (0.546875) net_send: could not write to fd=5, errno = 32
p2_8929:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_2_8977: (1.597656) net_send: could not write to fd=5, errno = 32
p3_8983:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_3_9030: (1.332031) net_send: could not write to fd=5, errno = 32
**  lapw0 crashed!
0.214u 0.256s 0:03.69 12.4% 0+0k 0+0io 0pf+0w
error: command   /home/wxfeng/apps/WIEN2k_10.1/lapw0para -c lapw0.def   failed

   stop error


 lapw0.error
=

**  Error in Parallel lapw0
**  lapw0 STOPPED at Thu Jun 10 00:03:33 CST 2010
**  check ERROR FILES!


= standard output
=

 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
 LAPW0 END
forrtl: severe (104): incorrect POSITION= specifier value for
connected file, unit 11, file /pub/wxfeng/WIEN2k/GaAs/GaAs.r2v
Image              PC                Routine            Line        Source
lapw0_mpi  005A6981  Unknown   Unknown  Unknown
lapw0_mpi  005A5955  Unknown   Unknown  Unknown
lapw0_mpi  00555BFA  Unknown   Unknown  Unknown
lapw0_mpi  005179E5  Unknown   Unknown  Unknown
lapw0_mpi  005170D2  Unknown   Unknown  Unknown
lapw0_mpi  00524840  Unknown   Unknown  Unknown
lapw0_mpi  0043AEB4  MAIN__   1636  lapw0.F
lapw0_mpi  00405B3C  Unknown   Unknown  Unknown
libc.so.6  002A9707D3FB  Unknown   Unknown  Unknown
lapw0_mpi  00405A6A  Unknown   Unknown  Unknown
p4_error: latest msg from perror: Bad file descriptor
cat: No match.

   stop error
===




My input files are:

 GaAs.in0 ===

TOT   28        (5...CA-LDA, 13...PBE-GGA, 11...WC-GGA)
R2V       IFFT      (R2V)
  48  48  48   2.00     min IFFT-parameters, enhancement factor


 GaAs.in0_grr ===

TOT   50        (5...CA-LDA, 13...PBE-GGA, 11...WC-GGA)
R2V       IFFT      (R2V)
  48  48  48   2.00     min IFFT-parameters, enhancement factor


 .machines ==

#mpi-para for lapw0, kpoint-para for others.
lapw0:alpha1:4 alpha2:4
1:alpha1:1
1:alpha1:1
1:alpha1:1
1:alpha1:1
1:alpha2:1
1:alpha2:1
1:alpha2:1
1:alpha2:1




My complier, mathlib and make options are:

cc
ifort  (intel 11.0,include mkl)
mpif90 (mpich-1.2.7)
fftw2.1.5

current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -align -DINTEL_VML -traceback
current:FPOPT:$(FOPT)
current:LDFLAGS:$(FOPT)
-L/home/wxfeng/intel/Compiler/11.0/069/mkl/lib/em64t -pthread
-i-static
current:DPARALLEL:'-DParallel'
current:R_LIBS:-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lguide
current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
-L/home/wxfeng/apps/fftw2.1.5-mpich/lib -lfftw_mpi -lfftw $(R_LIBS)
current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_



What causes this error, and how can I handle it? Thanks for your help!

Feng.


[Wien] bandgap of narrow gap semiconductor (below 1 eV)

2010-05-18 Thread wanxiang feng
Honorable Professor Blaha,

1. Is the MBJLDA exchange potential (PRL 102, 226401 (2009)) already
implemented in WIEN2k_09.2? If not, will it be implemented in the next
version, and when will the next version be released?

2. Apart from MBJLDA, are there efficient methods to reasonably
describe the band gap of narrow-gap semiconductors (below 1 eV) in the
current version 09.2? Any suggestion will be appreciated!


Thanks for your help!

Feng.