[Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Rémi Arras

Dear Prof. Blaha, dear Wien2k users,

We tried to install the latest version of Wien2k (14.1) on a supercomputer 
and we are facing some trouble with the MPI-parallel version.


1) lapw0 runs correctly in sequential mode, but it crashes systematically 
when the parallel option is activated (independently of the number of 
cores we use):


lapw0 -p    (16:08:13) starting parallel lapw0 at lun. sept. 29 16:08:13 CEST 2014
 .machine0 : 4 processors
Child id 1 SIGSEGV
Child id 2 SIGSEGV
Child id 3 SIGSEGV
Child id 0 SIGSEGV
**  lapw0 crashed!
0.029u 0.036s 0:50.91 0.0%  0+0k 5248+104io 17pf+0w
error: command /eos3/p1229/remir/INSTALLATION_WIEN/14.1/lapw0para -up -c lapw0.def failed

stop error


w2k_dispatch_signal(): received: Segmentation fault
w2k_dispatch_signal(): received: Segmentation fault
Child with myid of 1 has an error
'Unknown' - SIGSEGV
Child id 1 SIGSEGV
application called MPI_Abort(MPI_COMM_WORLD, 0) - process 1
**  lapw0 crashed!
cat: No match.
0.027u 0.034s 1:33.13 0.0%  0+0k 5200+96io 16pf+0w
error: command /eos3/p1229/remir/INSTALLATION_WIEN/14.1/lapw0para -up -c lapw0.def failed



2) lapw2 also crashes sometimes when MPI parallelization is used. 
Sequential or k-parallel runs are OK, and, contrary to lapw0, the error 
does not occur in all cases (we did not notice any problem when testing 
the MPI benchmark with lapw1):


w2k_dispatch_signal(): received: Segmentation fault
application called MPI_Abort(MPI_COMM_WORLD, 768) - process 0


Our system is a Bullx DLC cluster (Red Hat Linux + Intel Ivy Bridge), and 
we use the Intel compiler (+MKL) intel/14.0.2.144 and intelmpi/4.1.3.049.

The batch scheduler is SLURM.

Here are the settings and the options we used for the installation:

OPTIONS:
current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML 
-Dmkl_scalapack -traceback -xAVX
current:FFTW_OPT:-DFFTW3 
-I/users/p1229/remir/INSTALLATION_WIEN/fftw-3.3.4-Intel_MPI/include
current:FFTW_LIBS:-lfftw3_mpi -lfftw3 
-L/users/p1229/remir/INSTALLATION_WIEN/fftw-3.3.4-Intel_MPI/lib

current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
current:DPARALLEL:'-DParallel'
current:R_LIBS:-lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread 
-lmkl_core -openmp -lpthread
current:RP_LIBS:-mkl=cluster -lfftw3_mpi -lfftw3 
-L/users/p1229/remir/INSTALLATION_WIEN/fftw-3.3.4-Intel_MPI/lib

current:MPIRUN:mpirun -np _NP_ _EXEC_
current:MKL_TARGET_ARCH:intel64

PARALLEL_OPTIONS:
setenv TASKSET no
setenv USE_REMOTE 1
setenv MPI_REMOTE 1
setenv WIEN_GRANULARITY 1
setenv WIEN_MPIRUN mpirun -np _NP_ _EXEC_

Any suggestions which could help us to solve this problem would be 
greatly appreciated.


Best regards,
Rémi Arras
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Michael Sluydts

Hello Rémi,

While I'm not sure this is the (only) problem, in our setup we also give 
mpirun the machines file:


setenv WIEN_MPIRUN mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_

which I generate, based on a 1-k-point-per-node setup, with the following 
Python script:


/wienhybrid

#!/usr/bin/env python
# Machines file generator for WIEN2k
# May 13th 2013
#
# Michael Sluydts
# Center for Molecular Modeling
# Ghent University
from collections import Counter
import subprocess, os

# locate the PBS node file (one line per allocated core)
nodefile = subprocess.Popen('echo $PBS_NODEFILE',
                            stdout=subprocess.PIPE, shell=True)
nodefile = nodefile.communicate()[0].strip()
nodefile = open(nodefile, 'r')
machines = nodefile.readlines()
nodefile.close()

node = ''
corecount = Counter()

# gather cores per node
for core in machines:
    node = core.split('.')[0]
    corecount[node] += 1

# if there are more nodes than k-points we must redistribute the
# remaining cores

# count the irreducible k-points (case name = working directory name)
IBZ = int(subprocess.Popen('wc -l < ' + os.getcwd().split('/')[-1] + '.klist',
                           stdout=subprocess.PIPE,
                           shell=True).communicate()[0]) - 2

corerank = corecount.most_common()

alloc = Counter()
total = Counter()
nodemap = []
# pick out the largest nodes and redivide the remaining ones by adding
# the largest leftover node to the k-point with the least allocated cores
for node, cores in corerank:
    if len(alloc) < IBZ:
        alloc[node] += cores
        total[node] += cores
    else:
        lowcore = total.most_common()[-1][0]
        total[lowcore] += cores
        nodemap.append((node, lowcore))

# give lapw0 all cores of the first node
machinesfile = 'lapw0: ' + corecount.keys()[0] + ':' + \
               str(corecount[corecount.keys()[0]]) + '\n'

for node in alloc.keys():
    # allocate the main node for this k-point group
    machinesfile += '1:' + node + ':' + str(alloc[node])
    # distribute leftover nodes
    extra = [x for x, y in nodemap if y == node]
    for ext in extra:
        for i in range(1, corecount[ext]):
            machinesfile += ' ' + ext
    machinesfile += '\n'

# If your nodes do not all have the same specifications you may have to
# change the weights ('1:') above and the granularity below; if you use a
# residue machine you should remove extrafine and add the residue
# configuration.
machinesfile += 'granularity:1\nextrafine:1\n'

# if you have memory issues or limited bandwidth between nodes, try
# uncommenting the following line (you can always try it and see if it
# speeds things up)
#machinesfile += 'lapw2 vector split:2\n'

machines = file('.machines', 'w')   # Python 2 file() builtin
machines.write(machinesfile)
machines.close()
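
For illustration, on a hypothetical job spanning two 16-core nodes (say 
node01 and node02) with at least two irreducible k-points, the script 
would write a .machines file along these lines:

lapw0: node01:16
1:node01:16
1:node02:16
granularity:1
extrafine:1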





Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Michael Sluydts
Perhaps an important note: the Python script is for a Torque/PBS queuing 
system (it is based on $PBS_NODEFILE).
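
Since your scheduler is SLURM rather than PBS, the node-gathering step at 
the top would have to change. A minimal sketch, assuming 'scontrol' is 
available on the batch node and that the job sets --ntasks-per-node (only 
then does SLURM define SLURM_NTASKS_PER_NODE); the rest of the script 
should carry over unchanged:

from collections import Counter
import os, subprocess

# expand the compressed hostlist in $SLURM_JOB_NODELIST;
# 'scontrol show hostnames' prints one hostname per line
out = subprocess.Popen('scontrol show hostnames $SLURM_JOB_NODELIST',
                       stdout=subprocess.PIPE, shell=True).communicate()[0]

corecount = Counter()
for node in out.split():
    # assumption: homogeneous nodes; weight each node by its task count
    corecount[node] += int(os.environ.get('SLURM_NTASKS_PER_NODE', '1'))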




Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Laurence Marks
It is often hard to know exactly what the issues are with MPI. Most often they
are due to incorrect combinations of scalapack/blacs in the linking options.

The first thing to check is your linking options, with
https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/.
What you have does not look exactly right to me, but I have not used your
release.
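
As an illustration (please check the advisor for your exact release), an
explicit ScaLAPACK link line for ifort + Intel MPI + threaded MKL (LP64)
usually looks something like the line below; note that it would replace the
-mkl=cluster shortcut rather than be combined with it:

current:RP_LIBS:-L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64
 -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core
 -liomp5 -lpthread -lm -lfftw3_mpi -lfftw3
 -L/users/p1229/remir/INSTALLATION_WIEN/fftw-3.3.4-Intel_MPI/lib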

If that does not work, look in case.dayfile, the log file.

If there is still nothing, it is sometimes useful to comment out the line

  CALL W2kinit

in lapw0.F, recompile, and then just do 'x lapw0 -p'. You will sometimes get
more information, although it is not as safe: without the handler, MPI tasks
can hang forever in some cases.
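
That is, assuming the call sits near the top of SRC_lapw0/lapw0.F as in the
stock sources:

!     disabled for debugging: lets the raw signal/traceback come through
!      CALL W2kinit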

-- 
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu
Corrosion in 4D: MURI4D.numis.northwestern.edu
Co-Editor, Acta Cryst A
"Research is to see what everybody else has seen, and to think what nobody
else has thought"
Albert Szent-Györgyi


Re: [Wien] Problem when running MPI-parallel version of LAPW0

2014-10-22 Thread Peter Blaha

Usually the crucial point for lapw0 is the fftw3 library.

I noticed you have fftw-3.3.4, which I have never tested. Since fftw changed 
incompatibly between fftw2 and fftw3, maybe they have changed something again ...


Besides that, I assume you have installed fftw using the same ifort and 
MPI versions ...
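
For reference, a minimal sketch of such a build, assuming the same
intel/14.0.2.144 and intelmpi/4.1.3.049 modules are loaded (flag names as in
the standard FFTW 3.x configure script):

./configure --prefix=/users/p1229/remir/INSTALLATION_WIEN/fftw-3.3.4-Intel_MPI \
            --enable-mpi CC=icc F77=ifort MPICC=mpiicc
make
make install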







--

  P.Blaha
--
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
Email: bl...@theochem.tuwien.ac.at    WIEN2k: http://www.wien2k.at
WWW:   http://www.imc.tuwien.ac.at/staff/tc_group_e.php
--


Re: [Wien] Wien Digest, Vol 107, Issue 6

2014-10-22 Thread Salman Zarrini

+
Dear Victor,

Thank you for your answer. I know the concepts one by one (at least I  
think I know them); however, my question is still about mapping them onto  
each other. For example, when we run an antiferromagnetic calculation in  
Wien2k for a bulk system, which of the closed shell, open shell,  
restricted or unrestricted configurations really applies in this case?
For example, as I mentioned: a non-spin-polarized calculation in  
Wien2k (run_lapw) apparently looks like a closed shell system, which  
is usually used for nonmagnetic or diamagnetic materials.
So, again, what is important for me is the approximate equivalence of  
these two groups of definitions.
It is always easy to understand how the orbitals are filled, for example  
in the O2 molecule (in the gas phase or as an impurity in a large  
system), and to predict the magnetic or spin-ordering behavior of the  
O2 molecule, but it becomes a bit challenging when we want to explain,  
for example, the antiferromagnetic behavior of NiO or the ferromagnetic  
behavior of Gd5Ge4, not by plotting the DOS or band structure but by  
presenting the molecular orbitals exactly as is done for the O2 molecule.


Cheers,

Salman
+



Subject: [Wien] Bridging from Physics to Chemistry
+++
Dear Wien2k users,

I always get confused when bridging from physics to chemistry in
explaining spin and magnetism.
So, I would highly appreciate it if anybody could kindly equate (if it
is possible), within DFT, the concepts of nonmagnetic, paramagnetic,
ferromagnetic, antiferromagnetic and ferrimagnetic on the one hand,
and closed shell, open shell, spin restricted and spin
unrestricted configurations on the other hand, especially in the
case of an infinite system like a usual bulk (magnetic or nonmagnetic),
which can easily be treated in a plane wave code like Wien2k.

To start, I can just say: doing a non-spin-polarized calculation in,
for example, Wien2k (run_lapw) corresponds to a closed shell calculation.

Also, to me a ferrimagnet looks like a spin unrestricted
configuration ... .

Best regards,

Salman Zarrini
+++




You are asking an impressive list of things and it is not easy
to answer them. You would need a complete master course for this.

1) Let me start from the electronic structure concept of closed and open
shells. A closed shell corresponds to having all orbital levels either
empty or containing a complete collection of electrons. So ns(2), np(6),
nd(10) or nf(14) are closed shells. An open shell corresponds to having
nl(N) for 0 < N < 2*(2*l+1). Remember that in period n the nl orbitals
are the valence ones and the ones involved in chemical bonding.

Closed shells are electronic groups that belong to the fully symmetric
irreducible representation (irrep) of the local symmetry group. So they
do not provide many energy levels to your system.

So, when you examine the optical properties of a Cr(+3) impurity in an
Al2O3 corundum crystal, it is the energy levels of the Cr(+3) open shell
that produce the interesting optical properties. The Al(+3) and
O(-2) ions are closed shells and they provide the chemical environment
where the 3d impurities do the nice things.

Similarly for the second and third row transition metal atoms or the nf
rare earths: all of them tend to produce rich open shell groups.

2) As for the cooperative magnetism of ferro, ferri, etc., I advise you
to explore some good texts on the subject: Tipler, Kittel, Ashcroft-Mermin.
The different types of magnetism correspond to different couplings of the
m_s spins in neighboring unit cells of the crystal.

3) The spin restricted and unrestricted SCF techniques correspond
to forcing the alpha (m_s=+1/2) and beta (m_s=-1/2) electrons to have
the same spatial description (restricted), or letting the two groups
occupy different regions in space, i.e. different R_{nl}(r) and
R_{nl}'(r) orbitals. The unrestricted techniques are very important
as a first step in solving the correlation energy problem.

If you have been lost in this lengthy post, don't worry. I told you that
your question was not easy.

Regards,
 Víctor Luaña

--
   \|/a  After years of working on a problem the genius shout:
  |^.^| what an idiot I am ... the solution is trivial!'
+-!OO--\_/--OO!--+---
!Dr.Víctor Luaña   !
! Departamento de Química Física y Analítica   !
! Universidad de Oviedo, 33006-Oviedo, Spain   !
! e-mail:   vic...@fluor.quimica.uniovi.es !
! phone: +34-985-103491  fax: +34-985-103125   !
+--+
 GroupPage : http://azufre.quimica.uniovi.es/  (being reworked)



Re: [Wien] Wien Digest, Vol 107, Issue 6

2014-10-22 Thread Víctor Luaña Cabal
On Wed, Oct 22, 2014 at 02:31:15PM +0200, Salman Zarrini wrote:
 +
 Dear Victor,

 Thank you for your answer, I know the concepts one by one (at least I  
 think I know), however, my question is still about their equalization,  
 for example, when we run an Anti ferromagnetic calculation in Wien2k  
 for a bulk system, which one of the Closed shell, open shell,  

I guess that you are a chemist (may I ask your main subject?) trying to follow
a road to solid state physics. Welcome to the road.

Maybe you will find useful the book by Roald Hoffmann that tries to introduce
chemists to the solid state:

Roald Hoffmann,
Solid State Physics: An Introduction, 2nd ed
http://eu.wiley.com/WileyCDA/WileyTitle/productCd-3527412824.html

There is a book by Dronskowski devoted to computational methods:

Computational Chemistry of Solid State Materials: A Guide for Materials
Scientists, Chemists, Physicists and others
by Richard Dronskowski
http://eu.wiley.com/WileyCDA/Section/id-WILEYEUROPE2_SEARCH_RESULT.html?query=Dronskowski

Regards,
 Dr. Víctor Luaña

--
   \|/a  After years of working on a problem the genius shout:
  |^.^| what an idiot I am ... the solution is trivial!'
+-!OO--\_/--OO!--+---
!Dr.Víctor Luaña   !
! Departamento de Química Física y Analítica   !
! Universidad de Oviedo, 33006-Oviedo, Spain   !
! e-mail:   vic...@fluor.quimica.uniovi.es !
! phone: +34-985-103491  fax: +34-985-103125   !
+--+
 GroupPage : http://azufre.quimica.uniovi.es/  (being reworked)


Re: [Wien] Wien Digest, Vol 107, Issue 6 (Bridging from Physics to Chemistry)

2014-10-22 Thread Salman Zarrini

+++
My background is physics, but I am working in a chemistry department, where  
I always have to switch the language from physics to chemistry when I want  
to discuss something, which makes life a bit tough for me.
Anyway, thank you for the links; I think I have the one from Roald  
Hoffmann in my archive.


Cheers,

Salman Zarrini
+++



Re: [Wien] Bridging from Physics to Chemistry

2014-10-22 Thread Peter Blaha
It is a bit beyond the topics of this mailing list, but I will still try 
to contribute to your understanding, hoping that I am not 
oversimplifying:


The terms closed and open shell in atoms/molecules usually mean 
that you have only paired electrons (each atomic/molecular orbital is 
occupied by a spin-up AND a spin-dn electron), or also unpaired electrons.
From this definition it is also clear that any atom/molecule with an 
odd number of electrons will be open-shell, and in an open-shell system 
there is a net spin magnetic moment, since the numbers of up/dn electrons 
are (usually) different.
In a bigger molecule you could have several unpaired electrons in 
different MOs, but they can be arranged in different ways in spin-up or 
dn, and one usually classifies them by specifying the spin multiplicity

(singlet, doublet, triplet, ...).

And last but not least, one can make an approximation and restrict spin-up 
and dn orbitals to be the same or not (restricted/unrestricted).


In solids things are a bit different:

If all electrons are paired and we have an insulator/semiconductor, we 
talk about a diamagnet (= closed shell), and it again implies that the 
number of electrons is even.
However, in contrast to atoms/molecules, we can have a paramagnetic 
METAL, which can have an odd number of electrons while the up and 
dn electrons are still equal. This is a consequence of the large (infinite) 
number of atoms in a 3D solid and the resulting delocalization of the 
electronic states, so that ONE atom may have only a small fraction of an 
electron in a particular orbital (better: a Bloch state).
So the Na atom is an open shell system with 1 unpaired electron, while 
metallic Na is a paramagnet (and we do run_lapw, i.e. we force equal 
numbers and orbitals for up and dn spin).

Also in a solid you can have unpaired electrons (take the metals Fe or 
Cr), but then these open shell solutions may differ in the way they 
have long-range order (something that of course does not exist in 
molecules). If the spins on all atoms point in the same direction, we 
speak of a ferromagnet (Fe), but they could also be antiferromagnetic 
(spin-up on one atom, spin-dn on the next, ...) or even more complicated 
(spin spirals, non-collinear (or canted) arrangements, ...).
Cr you can consider as AFM (although it actually has a long 
spin spiral ...).
So for AFM Cr we do a spin-unrestricted calculation with a total 
singlet (zero spin per unit cell), while for ferromagnetic Fe the total 
spin is non-zero (note: Fe has a NON-INTEGER spin moment of 2.2 uB, 
something which (to my knowledge) does not exist in a finite system).

And last but not least, an antiferromagnet in MO language is a system 
where there are more occupied orbitals of spin-up on atom 1, but more of 
spin-dn on atom 2.
Or if you like: when we do the O2 molecule in a periodic code using a 
big supercell, the triplet O2 molecular state is a ferromagnet, while 
the singlet state would be an antiferromagnet.






--

  P.Blaha
--
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
Email: bl...@theochem.tuwien.ac.at    WIEN2k: http://www.wien2k.at
WWW:   http://www.imc.tuwien.ac.at/staff/tc_group_e.php
--


Re: [Wien] Bridging from Physics to Chemistry

2014-10-22 Thread Salman Zarrini


Dear Prof. Blaha,

Thank you for your prompt and nice answer.
Can you please confirm that, in comparing a paramagnetic METAL and an  
antiferromagnetic solid, I may say that the paramagnetic METAL is a kind  
of closed shell system while the antiferromagnetic solid is an open  
shell one, but both of them have zero spin per unit cell (a singlet  
state); or, so to speak, that a paramagnetic METAL is a singlet closed  
shell system while an antiferromagnet is a singlet open shell system?


I have two more questions; your answers would be highly appreciated:

1. What happens in a bulk case when we start an SCF spin-polarized  
calculation (runsp_lapw) in which all spins, for example, are aligned up  
in the initialization (instgen_lapw - case.inst), but at the end no spin  
magnetic moment remains and a nonmagnetic system is harvested? This means  
we started from an open shell system (in this case a triplet or even  
higher) but the SCF finished with a closed shell system; are such  
changes meaningful?


2. Do you think it makes sense to talk about paramagnetic behavior,  
or to call a system paramagnetic, in a DFT calculation? On the one hand,  
zero kelvin is the temperature assumed in all DFT-level calculations and  
codes, and on the other hand we know that paramagnetic behavior only  
shows up above specific temperatures, like the Curie and Neel  
temperatures, in solids.


Best regards,

Salman Zarrini


