[Wien] How to print QTLs for f subshell

2012-07-27 Thread Jonathan Solomon
To WIEN2k users and developers:

I would like to print QTLs for all seven orbitals contained in the f
subshell. I set ISPLIT=15 in the case.struct file, but when I run init_lapw
it reverts to ISPLIT=8 and thus only prints up to the d orbitals. Manually
changing it back to 15 before running the calculation does not work either.

Thank you,

Jonathan Solomon
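(For reference, ISPLIT is set per atom in case.struct, on the line following
the atomic position. A sketch of the relevant lines; the coordinates and
multiplicity are illustrative, not taken from this report:

    ATOM  -1: X=0.00000000 Y=0.00000000 Z=0.00000000
              MULT= 1          ISPLIT=15
)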


[Wien] EFG contributions (valence/lattice) in WIEN2K

2012-07-27 Thread Dimitri Bogdanovski
Thank you very much for your assistance, Stefaan - the problem is solved.

Regards,
Dimitri

-- 
Dimitri Bogdanovski

Moderne Strukturanalytik komplexer chemischer Systeme (AK Haarmann) / Junior 
Research Group for Modern Structure Analysis 

Institut für Anorganische Chemie der RWTH Aachen / Institute of Inorganic 
Chemistry, RWTH Aachen University

AC-Nebengebäude (Gebäude 2010) / AC secondary building (building 2010)
Raum/room N112

Landoltweg 1
D-52074 Aachen


On 25/07/2012 13:05, Stefaan Cottenier wrote:
>It looks like you ran your regular scf-cycle with k-point parallelization.
>
>In that case, the subsequent lapw2 run should also be done in parallel
>(x lapw2 -p, including all other options of lapw2 as in the regular
>run: do 'grep lapw2 :log' to see what these other options were).
>
>Furthermore, the vector files from the regular run should still be
>available/accessible. Either start your separate lapw2 run from the same
>host if you run on a personal cluster, or add the lapw2 run immediately
>after the regular run (i.e. in the same job script) if you run on a
>centralized cluster with scheduler.
>
>Changing TOT to EFG is not necessary anymore; you can simply add the
>-efg switch to lapw2: 'x lapw2 -p -efg +otheroptions'
>
>Stefaan
>
>
>On 25/07/2012 12:57, Dimitri Bogdanovski wrote:
>> Dear WIEN community,
>>
>> I have a problem calculating the lattice/valence contributions of the
>> electric field gradient.
>>
>> According to a short guide by Katrin Koch and Stefaan Cottenier, a
>> normal initialization of the calculation and an SCF cycle are to be
>> performed (no problems so far). The output of the EFG is then given
>> as the total V_ZZ contribution.
>>
>> Apparently, one can obtain the lattice and valence contributions to
>> V_ZZ^tot by using the switch "EFG" in the case.in2 file and then
>> running lapw2 (single program). The contributions are supposedly
>> given in the case.output2 file.
>>
>> However, when I tried this, lapw2 was aborted with the following
>> error message:
>>
>> "Error in LAPW2 'l2main' - error reading parallel vectors"
>>
>> The "normal", i.e. full SCF cycle with the TOT switch in case.in2 was
>> carried out without any problems; the error appears only when I try to
>> run lapw2 on its own (in several cases).
>>
>> Support would be much appreciated.
>>
>> Regards, Dimitri Bogdanovski
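Putting the advice above into concrete commands, a minimal sketch, assuming a
k-point parallel scf run whose vector files are still accessible (the extra
options are whatever the grep reports for your case):

    # see which switches lapw2 was called with during the regular scf cycle
    grep lapw2 :log
    # repeat lapw2 in parallel, adding the EFG decomposition
    x lapw2 -p -efg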



[Wien] SRC_lapw0/compile.msg in WIEN2k_12.1

2012-07-27 Thread Masanori Koshino
Dear Pascal,

I replaced $(MKL_TARGET_ARCH) with ia32 or intel64. In my case this gives:

 L   Linker Flags: $(FOPT) -L$(MKLROOT)/lib/intel64 -pthread
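For comparison, assuming the shipped default uses $(MKL_TARGET_ARCH) as
described, the change in the siteconfig linker flags amounts to:

    Before:  L   Linker Flags: $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
    After:   L   Linker Flags: $(FOPT) -L$(MKLROOT)/lib/intel64 -pthread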

This may be related to MKL 10.3:

http://software.intel.com/en-us/articles/intel-mkl-103-release-notes/

"mklvars.* scripts no longer set $FPATH in the environment, and the
internal variable MKL_TARGET_ARCH will not be exported. This change will
not impact users, as the Intel compiler no longer requires the $FPATH
variable."
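Since the variable is no longer exported, an alternative workaround is to pass
the target architecture explicitly when sourcing the MKL environment script; a
sketch, assuming a default install path (adjust to your MKL location):

    # set up the MKL 10.3 environment for 64-bit (intel64) builds
    source /opt/intel/mkl/bin/mklvars.sh intel64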

-- 
Masanori Koshino
AIST, JAPAN
m-koshino at aist.go.jp


[Wien] Error while parallel run

2012-07-27 Thread Peter Blaha
How should I know the correct name of your computer ???

When you log in to the machine, what name are you using ??? Most likely, this 
will be the correct name.

If it is a shared memory machine you should use the same name for all
processes.
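A minimal sketch of this for a 16-core shared-memory host: find the real name
with 'hostname', then list it (or 'localhost') once per k-point parallel job
in .machines. The 16-job layout below is an assumption matching the hardware
described later in this thread:

    # find the machine's actual network name
    hostname

    # .machines for k-point parallelization on one 16-core shared-memory host
    1:localhost
    1:localhost
    # ... fourteen more identical lines, one per parallel job
    granularity:1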

On 26.07.2012 19:45, alpa dashora wrote:
> Dear Prof. Blaha, Prof. Marks and All Wien2k users,
>
> Thank you very much for your reply. I have given more details of my system, as 
> you requested:
>
> 1. What kind of system do you have ??
>
>  We have HP ProLiant DL380 G7 (8 servers) with 2 processors each. So we 
> have 16 processors and the total memory is shared by all the processors.
>
> 2. rsh ???   What did you specify in siteconfig when configuring the parallel 
> environment ??? shared memory or non-shared memory  ??
>  During the site configuration, I chose the shared-memory architecture.
>
> 3. are your nodes really called "cpu1", ...
>
> I used the 'top' command in a terminal; it shows the activity of all the
> processors and names them cpu1, cpu2, cpu3, ..., so I took the names from
> there.
>
> Please suggest the correct .machines file or any other solution to this 
> problem.
>
> With kind regards,
>
> On Thu, Jul 26, 2012 at 2:25 PM, Peter Blaha wrote:
>
> You seem to have several errors in your basic installation:
>
>
>  > setenv USE_REMOTE 0
>  > setenv MPI_REMOTE 0
>
>  > [arya:01254] filem:rsh: copy(): Error: File type unknown
>
> rsh ???   What did you specify in siteconfig when configuring the 
> parallel environment ???
>
> shared memory or non-shared memory  ??
> ssh  or  rsh  ??(most likely rsh will not work on most systems)
>
> What kind of system do you have ??
>
> a) Is it ONE computer with many cores (typically some SGI or IBM-power 
> machines, or a SINGLE Computer
>  with 2-4 Xeon-quadcore processors), or
> b) a "cluster" (connected via Infiniband) of several (Xeon multicore) 
> nodes
>
> Only a) is a "shared memory machine" and you can set USE_REMOTE to 0
>
> Another problem might be your   .machines file:
> are your nodes really called "cpu1", ...
>
> This implies more or less that you have a cluster of single-core machines 
> ???
>
> My guess is that you have a 16 core shared memory machine ???
> In this case, the  .machines file must always contain the same "correct" 
> machine name
> (or maybe "localhost"), but not cpu1,2
>
>
> On 26.07.2012 10:17, alpa dashora wrote:
>
> Dear Wien2k Users and Prof. Marks,
>
> Thank you very much for your reply. I am giving more information.
> Wien2k version: Wien2k_11.1, on 8 servers with two processors each.
> MKL library: 10.0.1.014
> OpenMPI: 1.3
> FFTW: 2.1.5
>
> My OPTIONS file is as follows:
>
> current:FOPT:-FR -O3 -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -l/opt/openmpi/include
> current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -traceback
> current:LDFLAGS:-L/root/WIEN2k_11/SRC_lib -L/opt/intel/cmkl/10.0.1.014/lib/em64t -lmkl_em64t -lmkl_blacs_openmpi_lp64 -lmkl_solver -lguide -lpthread -i-static
> current:DPARALLEL:'-DParallel'
> current:R_LIBS:-L/opt/intel/cmkl/10.0.1.014/lib/em64t -lmkl_scalapack_lp64 -lmkl_solver_lp64_sequential -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_openmpi_lp64 -Wl,--end-group -lpthread -lm -L/opt/openmpi/1.3/lib/ -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic -lnsl -lutil -limf -L/opt/fftw-2.1.5/lib/lib/ -lfftw_mpi -lrfftw_mpi -lfftw -lrfftw
> current:RP_LIBS:-L/opt/intel/cmkl/10.0.1.014/lib/em64t -lmkl_scalapack_lp64 -lmkl_solver_lp64_sequential -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_openmpi_lp64 -Wl,--end-group -lpthread -lm -L/opt/openmpi/1.3/lib/ -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic -lnsl -lutil -limf -L/opt/fftw-2.1.5/lib/lib/ -lfftw_mpi -lrfftw_mpi -lfftw -lrfftw
> current:MPIRUN:/opt/openmpi/1.3/bin/mpirun -v -n _NP_ _EXEC_
>
> My parallel_options file is as follows:
>
> setenv USE_REMOTE 0
> setenv MPI_REMOTE 0
> setenv WIEN_GRANULARITY 1
> setenv WIEN_MPIRUN "/opt/openmpi/1.3/bin/mpirun -v -n _NP_ -machinefile _HOSTS_ _EXEC_"
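(A quick way to check the MPI installation itself, independent of WIEN2k; a
sketch, assuming the OpenMPI path quoted above:

    # should print the host's name once per process
    /opt/openmpi/1.3/bin/mpirun -n 4 hostname
)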
>
> The compilation produced no error messages and all the executable files were 
> generated. I have edited the parallel_options file, so the error message has 
> now changed, and it is as