Re: [Pw_forum] Error in davcio running head.x

2014-10-21 Thread Paolo Giannozzi
On Tue, 2014-10-21 at 18:22 +0200, Valentina Cantatore wrote:


> The program stops with the error:
> "Error in routine davcio (25): error while writing from file
> "*/./out/_ph0/*.prd38".
> 
> I had a similar problem with a calculation on a molecular system and
> you suggested me to reduce the cutoffs. It worked in that case.

I seriously doubt that it worked because you reduced the cutoff.
The I/O should write no matter what the cutoff is, as long as there
is enough disk space and the file size does not exceed the allowed
maximum length.
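For reference, one can estimate whether such a wavefunction record is approaching a file-size limit from the number of plane waves and bands. The sketch below uses illustrative placeholder values, not numbers from this calculation, and the 2 GiB threshold is only one common limit (32-bit record lengths, some filesystems):

```python
# Rough sketch: estimate the size of a direct-access record such as davcio
# writes (one complex double per plane-wave coefficient per band).
# npw and nbnd below are illustrative placeholders, not values from this run.

def record_size_bytes(npw, nbnd, bytes_per_complex=16):
    """Size of one record holding nbnd wavefunctions of npw coefficients."""
    return npw * nbnd * bytes_per_complex

# Example: 200k plane waves, 500 bands -> 1.6 GB per record,
# uncomfortably close to a 2 GiB (2**31 - 1 bytes) limit.
size = record_size_bytes(200_000, 500)
print(size)              # 1600000000
print(size > 2**31 - 1)  # False, but not by much
```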

Paolo

> Now I have significantly reduced them, but I have the same problem. I
> use Quantum ESPRESSO 5.1 and I work with 64 CPUs.
> 
> 
> I attach the input files for the pwx calculation and for the head one.
> 
> 
> Any help will be really appreciated.
> 
> 
> Thank you very much
> 
> 
> Valentina Cantatore
> PostDoc@Università del Piemonte Orientale, Alessandria 
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum

-- 
Paolo Giannozzi, Dept. Chemistry, 
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222 


[Pw_forum] Binding energy of Ti on Graphene oxide sheet

2014-10-21 Thread rajiv kumar
I was going through the paper "Wang et al., ACS Nano, Vol. 3, No. 10,
2995-3000 (2009)", where the binding energy of titanium on graphene oxide
(Figure 3) was computed using the formula below:
Eb (Ti) = E(GO) + E(Ti) - E(Ti/GO)
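Numerically, the formula is just a difference of total energies. In the sketch below all values are made-up placeholders (in eV); only the sign convention matters (Eb > 0 means Ti is bound to the sheet):

```python
# Sketch of the binding-energy formula Eb(Ti) = E(GO) + E(Ti) - E(Ti/GO).
# All energies are made-up placeholder values (eV): isolated GO sheet,
# isolated Ti atom, and the combined Ti/GO system.

def binding_energy(e_go, e_ti, e_ti_go):
    return e_go + e_ti - e_ti_go

eb = binding_energy(e_go=-500.0, e_ti=-100.0, e_ti_go=-602.5)
print(eb)  # 2.5 -> positive, i.e. Ti is bound in this toy example
```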

Here is my confusion: in E(GO) some of the oxygen atoms are doubly
bonded to the graphene sheet, but in E(Ti/GO) a few oxygens break one of
their bonds to the sheet and attach to the Ti atom. So the final
structure of Ti-with-GO is completely different. My question is: is
this way of calculating the binding energy of Ti on graphene oxide
correct? If yes, can anyone explain this?

With best regards,
Rajiv Kumar Chouhan,
Post-Doctoral Fellow,
Boise State University,
Idaho 83725

Re: [Pw_forum] Convergence of Magnetization in Graphene Monovacancy Supercell

2014-10-21 Thread BARRETEAU Cyrille
Dear Haricharan Padmanabhan

The magnetization associated with a vacancy is known to converge very slowly. As
you will see in the following detailed study, Physical Review B 85, 245443
(2012),
the 6x6 supercell is in fact very small... if you want to get your
magnetization converged.

2D systems can have some advantages but also some serious drawbacks due to the
very slow convergence of certain quantities related to the two-dimensionality.
This is also why tight-binding is very popular in graphene :-)

good luck

Cyrille



--
Cyrille Barreteau
CEA Saclay, IRAMIS, SPEC Bat. 771
91191 Gif sur Yvette Cedex, FRANCE

DTU Nanotech
Ørsteds Plads, building 345E
DK-2800 Kgs. Lyngby,  DENMARK

+33 1 69 08 29 51 / +33 6 47 53 66 52 (mobile)  (Fr)
+45 45 25 63 12 / +45 28 72 55 18 (mobile)  (Dk)
email: cyrille.barret...@cea.fr  /  cyr...@nanotech.dtu.dk
Web: http://iramis.cea.fr/Pisp/cyrille.barreteau/

From: pw_forum-boun...@pwscf.org [pw_forum-boun...@pwscf.org] on behalf of
Haricharan Padmanabhan [hari00...@gmail.com]
Sent: Tuesday, 21 October 2014 10:43
To: pw_forum@pwscf.org
Subject: [Pw_forum] Convergence of Magnetization in Graphene Monovacancy
Supercell

Dear Quantum ESPRESSO users,

I am attempting to estimate the value of the magnetism in Graphene with a 
mono-vacancy, using supercells of different sizes.

Some background -

- One would expect (from literature) the magnetism to converge to around 1.5 
bohr magnetons (uB) as the supercell size is increased.

- Since vacancies result in localized states at the Fermi level (flat bands, or 
peaks in the DOS), a dense k-point mesh is usually required to accurately 
estimate (N.up - N.down), and hence the magnetism.

I first obtained convergence with respect to k-point sampling, for a 4x4 
supercell (31 atoms + 1 vacancy)

K-point mesh    Total Energy (Ry)    Total magnetization (uB)
16x16           -355.586             1.29
20x20           -355.586             1.21
24x24           -355.586             1.25
32x32           -355.586             1.27
36x36           -355.586             1.27
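As a quick sanity check on the table above, one can look at the spread of the magnetization across the meshes (values copied from the table; the 0.1 uB tolerance is an arbitrary choice):

```python
# Spread of the total magnetization (uB) from the 4x4-supercell table above,
# as a crude convergence check: a spread much larger than the chosen
# tolerance would mean the value is not converged with respect to k-points.

mag = {(16, 16): 1.29, (20, 20): 1.21, (24, 24): 1.25,
       (32, 32): 1.27, (36, 36): 1.27}

spread = max(mag.values()) - min(mag.values())
print(round(spread, 2))  # 0.08
print(spread < 0.1)      # True: converged to within ~0.1 uB here
```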


A larger 6x6 supercell (71 atoms + 1 vacancy), by conventional wisdom, would 
require a less dense k-point mesh for convergence. However, even with a dense 
32x32 k-point mesh, I get a non-converged value of 0.59 uB for the magnetism. 
Different calculations with different k-point meshes give me values that 
oscillate between 0.59 and 1.45 uB, with no apparent pattern. It does not make 
sense to me to further increase the k-point mesh density.

Clearly, the flat bands at the Fermi level are causing trouble depending on 
whether they've been bumped slightly above or below the Fermi level, due to 
inadequate k-point sampling in different calculations. How can I fix this 
problem? Will doing a manual k-point sampling help?



A part of the input file -


 &system
    ibrav = 4, celldm(1) = 27.9, celldm(3) = 1, nat = 71, ntyp = 1,
    ecutwfc = 30.0,
    ecutrho = 250.0,
    occupations = 'smearing', smearing = 'gaussian', degauss = 0.001,
    nspin = 2, starting_magnetization(1) = 0.7
 /
 &electrons
    diagonalization = 'cg'
    mixing_mode = 'plain'
    mixing_beta = 0.1
    conv_thr = 1.0d-6
    electron_maxstep = 200
 /

ATOMIC_SPECIES
 C 12.011  c_pbe_v1.2.uspp.F.UPF

K_POINTS {automatic}
  32 32 1 0 0 0


A part of the output file -


     the Fermi energy is   -1.9682 eV

     total energy              =    -815.17816366 Ry
     Harris-Foulkes estimate   =    -815.17815922 Ry
     estimated scf accuracy    <       0.0077 Ry

 The total energy is the sum of the following terms:

 one-electron contribution =   -5427.83442348 Ry
 hartree contribution  =2763.25828072 Ry
 xc contribution   =-257.55564014 Ry
 ewald contribution=2106.95386447 Ry
 smearing contrib. (-TS)   =  -0.00024524 Ry

 total magnetization   = 0.59 Bohr mag/cell
 absolute magnetization= 0.79 Bohr mag/cell


Thank you.

Haricharan Padmanabhan

Indian Institute of Technology Madras

[Pw_forum] Quantum Espresso for Solid-Solid interfaces?

2014-10-21 Thread daniel idukkala Idukkala
Hi All,

I would like to know whether Quantum ESPRESSO can be used to study the
solid-solid interfaces in a solar cell.

Thanks in advance for the replies.

Regards
Daniel

[Pw_forum] Error in davcio running head.x

2014-10-21 Thread Valentina Cantatore
Dear QE users,

I'm here to ask again for your help because I have problems running the
head.x program.

The program stops with the error:
"Error in routine davcio (25): error while writing from file
"*/./out/_ph0/*.prd38".

I had a similar problem with a calculation on a molecular system and you
suggested me to reduce the cutoffs. It worked in that case.

Now I have significantly reduced them, but I have the same problem. I use
Quantum ESPRESSO 5.1 and I work with 64 CPUs.

I attach the input files for the pwx calculation and for the head one.

Any help will be really appreciated.

Thank you very much

Valentina Cantatore
PostDoc@Università del Piemonte Orientale, Alessandria


MAPbI3_beta_20_100_head.inp
Description: Binary data


MAPbI3_beta_20_100_scf.inp
Description: Binary data

Re: [Pw_forum] nqx=1 in input but full grid in output

2014-10-21 Thread Lorenzo Paulatto


On 10/21/2014 04:49 PM, Roberto Gaspari wrote:


EXX: grid of k+q point setup nkqs = 42

that is, the grid for the Fock operator is as dense as the regular
Monkhorst-Pack grid.


Does all this correspond to an expected behavior of pwscf?


Yes it does. Well, I think it does; you do not say how many irreducible
k-points you have in your system.


nqx=1 means that wfcs at each k-point only exchange with wfcs at the same
k-point. What you are thinking of is having wfcs at any k-point only exchange
with wfcs at the Gamma point.


A big limiting factor in EXX calculations is that CPU time scales with
the square of the number of k-points. By setting nqx to a fixed value,
the scaling becomes linear again once nk is bigger than nq.
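The scaling argument can be made concrete with a toy count of (k, k+q) exchange pairs; this is purely illustrative, and only the trend matters:

```python
# Toy cost model for EXX: the number of (k, k+q) exchange pairs entering the
# Fock operator grows like nk * nq. Purely illustrative; the prefactor and
# the grid sizes are arbitrary.

def exx_pairs(nk, nq):
    return nk * nq

# When nq tracks nk, the cost is quadratic in nk:
print([exx_pairs(n, n) for n in (8, 16, 32)])  # [64, 256, 1024]
# When nq is pinned (here at 8), the cost grows only linearly in nk:
print([exx_pairs(n, 8) for n in (8, 16, 32)])  # [64, 128, 256]
```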



--
Dr. Lorenzo Paulatto
IdR @ IMPMC -- CNRS & Université Paris 6
+33 (0)1 44 275 084 / skype: paulatz
http://www.impmc.upmc.fr/~paulatto/
23-24/4é16 Boîte courrier 115, 4 place Jussieu 75252 Paris Cédex 05


Re: [Pw_forum] Issue while executing QE-5.0 GPU

2014-10-21 Thread Nisha Agrawal
Hi Filippo,

As per your suggestion I ran the non-accelerated version of QE on the same
input data, and it worked well for me.
Following are the contents of the make.sys file used for the QE GPU
compilation. Please let me know what other details are required to help with
this issue.

#

MANUAL_DFLAGS  = -D__ISO_C_BINDING -D__DISABLE_CUDA_NEWD -D__DISABLE_CUDA_ADDUSDENS
DFLAGS         = -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK -D__CUDA -D__PHIGEMM -D__OPENMP $(MANUAL_DFLAGS)
FDFLAGS= $(DFLAGS)

# IFLAGS = how to locate directories where files to be included are
# In most cases, IFLAGS = -I../include

IFLAGS = -I../include -I/opt/app/espresso-5.0.2-gpu-14.03/espresso-5.0.2/GPU/..//phiGEMM/include -I/opt/CUDA-5.5/include

# MOD_FLAGS = flag used by f90 compiler to locate modules
# Each Makefile defines the list of needed modules in MODFLAGS

MOD_FLAG  = -I

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90 = mpiifort
#F90   = ifort
CC = mpiicc
F77= mpiifort

# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS

CPP= cpp
CPPFLAGS   = -P -traditional $(DFLAGS) $(IFLAGS)

# compiler flags: C, F90, F77
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax

CFLAGS = -DMKL_ILP64 -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS   = $(FFLAGS) -nomodule -fpp $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS         = -i8 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0

# compiler flags without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack

FFLAGS_NOOPT   = -i8 -O0 -assume byterecl -g -traceback

# compiler flag needed by some compilers when the main is not fortran
# Currently used for Yambo

FFLAGS_NOMAIN   = -nofor_main

# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty

LD = mpiifort
LDFLAGS=  -ilp64
LD_LIBS= -L/opt/CUDA-5.5/lib64 -lcublas  -lcufft -lcudart

# External Libraries (if any) : blas, lapack, fft, MPI

# If you have nothing better, use the local copy :
# BLAS_LIBS = /your/path/to/espresso/BLAS/blas.a
# BLAS_LIBS_SWITCH = internal

BLAS_LIBS      = /opt/app/espresso-5.0.2-gpu-14.03/espresso-5.0.2/GPU/..//phiGEMM/lib/libphigemm.a -L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64 -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm
BLAS_LIBS_SWITCH = external

# If you have nothing better, use the local copy :
# LAPACK_LIBS = /your/path/to/espresso/lapack-3.2/lapack.a
# LAPACK_LIBS_SWITCH = internal
# For IBM machines with essl (-D__ESSL): load essl BEFORE lapack !
# remember that LAPACK_LIBS precedes BLAS_LIBS in loading order

# CBLAS is used in case the C interface for BLAS is missing (i.e. ACML)
CBLAS_ENABLED = 0

LAPACK_LIBS=
LAPACK_LIBS_SWITCH = external

ELPA_LIBS_SWITCH = disabled
SCALAPACK_LIBS = -lmkl_scalapack_ilp64 -lmkl_blacs_intelmpi_ilp64

# nothing needed here if the the internal copy of FFTW is compiled
# (needs -D__FFTW in DFLAGS)

FFT_LIBS   = -L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64

# For parallel execution, the correct path to MPI libraries must
# be specified in MPI_LIBS (except for IBM if you use mpxlf)

MPI_LIBS   =

# IBM-specific: MASS libraries, if available and if -D__MASS is defined in FDFLAGS

MASS_LIBS  =

# ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv

AR = ar
ARFLAGS= ruv

# ranlib command. If ranlib is not needed (it isn't in most cases) use
# RANLIB = echo

RANLIB = ranlib

# all internal and external libraries - do not modify

FLIB_TARGETS   = all

# CUDA section
NVCC = /opt/CUDA-5.5/bin/nvcc
NVCCFLAGS    = -O3 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_20,code=sm_21

PHIGEMM_INTERNAL = 1
PHIGEMM_SYMBOLS  = 1
MAGMA_INTERNAL   = 0

LIBOBJS        = ../flib/ptools.a ../flib/flib.a ../clib/clib.a ../iotk/src/libiotk.a
LIBS           = $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FFT_LIBS) $(BLAS_LIBS) $(MPI_LIBS) $(MASS_LIBS) $(LD_LIBS)

# wget or curl - useful to download from network
WGET = wget -O

##


[Pw_forum] nqx=1 in input but full grid in output

2014-10-21 Thread Roberto Gaspari
Dear all,

I am performing my first hybrid functional calculations with PWSCF.
I was trying to simulate a quite large system (24 atoms/unit cell,
volume = 2082.0010 a.u.^3)
on a large machine (CINECA Fermi). The scf appears quite slow, so I was trying
to test the convergence of the grid density for the Fock operator, to see if I
can be any faster without losing too much accuracy.
Following the EXX examples I set in the &system section


 &system
    ecutwfc = 70,
    ecutrho = 280,
    occupations = 'smearing',
    degauss = 0.01,
    input_dft = 'pbe0', nqx1 = 1, nqx2 = 1, nqx3 = 1,
 /

and for the regular Monkhorst-Pack grid:

K_POINTS (automatic)
3 3 3 1 1 1


In the output file I obtain a Monkhorst-Pack grid of 42 k-points, which is fine.
However, I was expecting to get something like

EXX: grid of k+q point setup nkqs = 1

since, as far as I have understood, setting all nqx's to 1 amounts to performing
a q=0 calculation.

I get, instead:

EXX: grid of k+q point setup nkqs = 42

that is, the grid for the Fock operator is as dense as the regular
Monkhorst-Pack grid.

Does all this correspond to an expected behavior of pwscf?

I thank all of you for your attention,

Best Regards,

Roberto Gaspari,
Italian Institute of Technology,
Concept Lab

[Pw_forum] electric field

2014-10-21 Thread Nossa, Javier
Hi,
I am doing an optimization including an external electric field, using the
SVN version of pwscf.
I am getting the following error after the first iteration of the second
scf geometry:
 extrapolated charge  192.39606, renormalised to  189.0
 Atomic wfc used for LDA+U Projector are NOT orthogonalized
 total cpu time spent up to now is    22601.5 secs
 per-process dynamical memory:   718.8 Mb
 Self-consistent Calculation
 iteration #  1 ecut=   100.00 Ry beta=0.30
 %%
 Error in routine gk_sort (1):
 array gk out-of-bounds
 %%

 stopping ...



Here is my input file:

 &control
    prefix='job',
   calculation = "vc-relax",
   restart_mode = 'from_scratch',
   verbosity = 'high',
   tstress = .true.,
   tprnfor = .true.,
   nstep = 100,
   etot_conv_thr = 1.0d-6,
   forc_conv_thr = 1.0d-5,
   iprint = 1,
   max_seconds = 432000, ! 5 days
   lelfield=.true.,
   nberrycyc=1,
 /

 &system
    ibrav= 6,
celldm(1)=15.8610666,
celldm(3)= 1.0139966,
nat=  39,
ntyp= 4,
input_dft='wc',
!nbnd = 220, !189 electrons,
ecutwfc = 100.0,
!occupations='smearing', smearing='gauss', degauss=0.003,
!occupations='tetrahedra',
nspin=2,
tot_magnetization= 5.0,
!starting_magnetization(1)= 0.0,
!starting_magnetization(2)= 0.0,
!starting_magnetization(3)= 0.0,
!starting_magnetization(4)= 1.0,
lda_plus_u = .true.,
Hubbard_U(4)=6,
Hubbard_J0(4)=0.6,
!U_projection_type='file'
starting_ns_eigenvalue(1,1,4)=1.d0,
starting_ns_eigenvalue(2,1,4)=1.d0,
starting_ns_eigenvalue(3,1,4)=1.d0,
starting_ns_eigenvalue(4,1,4)=1.d0,
starting_ns_eigenvalue(5,1,4)=1.d0,
starting_ns_eigenvalue(1,2,4)=0.d0,
starting_ns_eigenvalue(2,2,4)=0.d0,
starting_ns_eigenvalue(3,2,4)=0.d0,
starting_ns_eigenvalue(4,2,4)=0.d0,
starting_ns_eigenvalue(5,2,4)=0.d0,
/

 &electrons
    conv_thr =  1.0d-7
electron_maxstep=300,
mixing_beta=0.3,
startingwfc='random'
efield_cart(1)=0.027,efield_cart(2)=0.d0,efield_cart(3)=0.d0
!efield_phase='read'
/
 &ions
 /
 &cell
    cell_dynamics = 'bfgs',
 /
ATOMIC_SPECIES
...
ATOMIC_POSITIONS {crystal}
...
K_POINTS {automatic}
4 4 4 0 0 0


My job script:
mpirun -np 64 /u/home/jnossa/PRO/svnespresso-5.1/bin/pw.x 
printopt


I reduced the k-point grid to 2x2x2 but got the same error.
How can I solve this problem?

Thanks.

-- 
With best regards,
Javier Francisco Nossa

Postdoc at Geophysical Laboratory
Carnegie Institution of Washington
5251 Broad Branch Road, N.W.
Washington, DC 20015-1305
Tel.: 1.240.476.3993
E-mail: jno...@carnegiescience.edu 

Re: [Pw_forum] Band structure calculation with external electric field.

2014-10-21 Thread plgong
Dear Barnali Bhattacharya,
I think you can first fully relax (vc-relax) the structure without the field
to get the optimized geometry.
Then you can relax again with the field added. Lastly, you need to compare the
results before and after adding the field
in your bands calculation.

Best wishes

P. L. Gong
ISSP, China





> -Original Message-
> From: "Barnali Bhattacharya" 
> Sent: Tuesday, 21 October 2014
> To: pw_forum@pwscf.org
> Cc:
> Subject: [Pw_forum] Band structure calculation with external electric field.
> 
> 
> 
> Dear QE user,
> I am a new user of QE. I want to calculate the band structure of bi-layer
> graphene by applying an external electric field. I have done the scf
> calculation without the electric field, then again did the scf calculation
> with an electric field included in the z-direction (efield_cart(1) = 0.d0,
> efield_cart(2) = 0.d0, efield_cart(3) = 0.001d0). Then I have done the nscf
> calculation with the electric field. Now I have to do the bands calculation,
> but I am confused about this step. My questions are:
> 1) Is it necessary to optimize (vc-relax) the structure with an electric
> field before performing the scf calculation?
> 2) In the bands calculation should I include the 'lelfield=.true.' option?
> 
> Could anyone please guide me and share their experience?
> I am waiting for positive response
> Thanking you in advance.
> 
> Sincerely 
> barnali

--


Addr: Institute of Solid State Physics, Chinese Academy of 
Sciences, Hefei, Anhui 230031, China
Tel: +86-551-65591591(office), 18756086113(cell phone)
Email: plg...@theory.issp.ac.cn







Re: [Pw_forum] Convergence of Magnetization in Graphene Monovacancy Supercell

2014-10-21 Thread Lorenzo Paulatto
On 10/21/2014 10:43 AM, Haricharan Padmanabhan wrote:
> Clearly, the flat bands at the Fermi level are causing trouble 
> depending on whether they've been bumped slightly above or below the 
> Fermi level, due to inadequate k-point sampling in different 
> calculations. How can I fix this problem? Will doing a manual k-point 
> sampling help?

Are you increasing the inter-layer space in the larger cell? I cannot 
tell from the tiny bit of input file you provide.

If you do, don't. Too much vacuum space makes the calculation difficult 
to converge and you may even get electrons in the vacuum. About 6 or 7 
Angstroms of vacuum is enough.

best regards


-- 
Dr. Lorenzo Paulatto
IdR @ IMPMC -- CNRS & Université Paris 6
+33 (0)1 44 275 084 / skype: paulatz
http://www.impmc.upmc.fr/~paulatto/
23-24/4é16 Boîte courrier 115, 4 place Jussieu 75252 Paris Cédex 05



Re: [Pw_forum] Electric field in silicene

2014-10-21 Thread plgong
I think the Ecut is small. Did you test its convergence with respect to the
total energy? Also, a large electric field may make convergence harder. Try a
larger Ecut and a smaller field, good luck!
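To get a feeling for how large 0.008 a.u. is, one can convert it to V/Angstrom. The sketch below assumes the conversion 1 a.u. = 51.4220632e10 V/m quoted in the PW input documentation for sawtooth fields; check which convention your QE version uses with lelfield:

```python
# Back-of-the-envelope conversion of efield_cart(3)=0.008 to V/Angstrom.
# ASSUMPTION: 1 atomic unit of electric field = 51.4220632 V/Angstrom
# (the factor quoted in the PW input docs for sawtooth fields); verify the
# convention used by lelfield in your QE version before relying on this.

AU_TO_V_PER_ANGSTROM = 51.4220632

def field_in_v_per_angstrom(efield_au):
    return efield_au * AU_TO_V_PER_ANGSTROM

print(round(field_in_v_per_angstrom(0.008), 3))  # 0.411 -> a strong field
```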

On 2014-10-21 18:11:15, plg...@theory.issp.ac.cn wrote:

Dear all,

I want to get the DOS of silicene under the effect of an external
electric field. I have done the scf calculation without the electric field, then
again did the scf calculation with an electric field included in the
z-direction (with value 0.008 a.u.). But convergence is not achieved; the run
stopped after 800 iterations with this message:



"total cpu time spent up to now is 20409.6 secs

total energy = -64.27063591 Ry
Harris-Foulkes estimate = -62.93460760 Ry
estimated scf accuracy < 0.3422 Ry

End of self-consistent calculation

convergence NOT achieved after 800 iterations: stopping"





You will find below the input file for the scf calculation when an electric
field is applied:

&control
calculation='scf'
restart_mode='from_scratch',
prefix='elec0.08',
lelfield=.true.,
nberrycyc=3
pseudo_dir ='/home/siham/Desktop/espresso-5.0.1-GPU/pseudo',
outdir='/home/siham/Desktop/espresso-5.0.1-GPU/tmp'
/

&system
ibrav= 1, celldm(1)=10.18, nat= 8, ntyp= 1,
ecutwfc = 20.0
/

&electrons
electron_maxstep=800,
diagonalization='david',
conv_thr = 1.0d-8,
mixing_beta = 0.5,
startingwfc='random',
efield_cart(1)=0.d0,efield_cart(2)=0.d0,efield_cart(3)=0.008d0
/
ATOMIC_SPECIES
Si 28.086 Si.pbe-rrkj.UPF
ATOMIC_POSITIONS
Si -0.125 -0.125 -0.125
Si 0.375 0.375 -0.125
Si 0.375 -0.125 0.375
Si -0.125 0.375 0.375
Si 0.125 0.125 0.125
Si 0.625 0.625 0.125
Si 0.625 0.125 0.625
Si 0.125 0.625 0.625
K_POINTS {automatic}
3 3 7 0 0 0


Thanks in advance


[Pw_forum] Band structure calculation with external electric field.

2014-10-21 Thread Barnali Bhattacharya
Dear QE user,

I am a new user of QE. I want to calculate the band structure of bi-layer
graphene by applying an external electric field. I have done the scf
calculation without the electric field, then again did the scf calculation
with an electric field included in the z-direction (efield_cart(1) = 0.d0,
efield_cart(2) = 0.d0, efield_cart(3) = 0.001d0). Then I have done the nscf
calculation with the electric field. Now I have to do the bands calculation,
but I am confused about this step. My questions are:

1) Is it necessary to optimize (vc-relax) the structure with an electric
field before performing the scf calculation?

2) In the bands calculation should I include the 'lelfield=.true.' option?

Could anyone please guide me and share their experience?

I am waiting for a positive response.

Thanking you in advance.

Sincerely,

barnali

[Pw_forum] Electric field in silicene

2014-10-21 Thread siham Sadki
Dear all,

I want to get the DOS of silicene under the effect of an external
electric field. I have done the scf calculation without the electric field, then
again did the scf calculation with an electric field included in the
z-direction (with value 0.008 a.u.). But convergence is not achieved; the run
stopped after 800 iterations with this message:



"total cpu time spent up to now is 20409.6 secs 

total energy = -64.27063591 Ry 
Harris-Foulkes estimate = -62.93460760 Ry 
estimated scf accuracy < 0.3422 Ry 

End of self-consistent calculation 

convergence NOT achieved after 800 iterations: stopping"





You will find below the input file for the scf calculation when an electric
field is applied:

&control
calculation='scf'
restart_mode='from_scratch', 
prefix='elec0.08', 
lelfield=.true., 
nberrycyc=3 
pseudo_dir ='/home/siham/Desktop/espresso-5.0.1-GPU/pseudo', 
outdir='/home/siham/Desktop/espresso-5.0.1-GPU/tmp' 
/ 

&system
ibrav= 1, celldm(1)=10.18, nat= 8, ntyp= 1,
ecutwfc = 20.0 
/ 

&electrons
electron_maxstep=800,
diagonalization='david', 
conv_thr = 1.0d-8, 
mixing_beta = 0.5, 
startingwfc='random', 
efield_cart(1)=0.d0,efield_cart(2)=0.d0,efield_cart(3)=0.008d0 
/ 
ATOMIC_SPECIES 
Si 28.086 Si.pbe-rrkj.UPF 
ATOMIC_POSITIONS 
Si -0.125 -0.125 -0.125 
Si 0.375 0.375 -0.125 
Si 0.375 -0.125 0.375 
Si -0.125 0.375 0.375 
Si 0.125 0.125 0.125 
Si 0.625 0.625 0.125 
Si 0.625 0.125 0.625 
Si 0.125 0.625 0.625 
K_POINTS {automatic} 
3 3 7 0 0 0


Thanks in advance


[Pw_forum] Convergence of Magnetization in Graphene Monovacancy Supercell

2014-10-21 Thread Haricharan Padmanabhan
Dear Quantum ESPRESSO users,

I am attempting to estimate the value of the magnetism in Graphene with a
mono-vacancy, using supercells of different sizes.

Some background -

- One would expect (from literature) the magnetism to converge to around
1.5 bohr magnetons (uB) as the supercell size is increased.

- Since vacancies result in localized states at the Fermi level (flat
bands, or peaks in the DOS), a dense k-point mesh is usually required to
accurately estimate (N.up - N.down), and hence the magnetism.

I first obtained convergence with respect to k-point sampling, for a 4x4
supercell (31 atoms + 1 vacancy)

K-point mesh    Total Energy (Ry)    Total magnetization (uB)
16x16           -355.586             1.29
20x20           -355.586             1.21
24x24           -355.586             1.25
32x32           -355.586             1.27
36x36           -355.586             1.27

A larger 6x6 supercell (71 atoms + 1 vacancy), by conventional wisdom,
would require a less dense k-point mesh for convergence. However, even with
a dense 32x32 k-point mesh, I get a non-converged value of 0.59 uB for the
magnetism. Different calculations with different k-point meshes give me
values that oscillate between 0.59 and 1.45 uB, with no apparent pattern.
It does not make sense to me to further increase the k-point mesh density.

Clearly, the flat bands at the Fermi level are causing trouble depending on
whether they've been bumped slightly above or below the Fermi level, due to
inadequate k-point sampling in different calculations. How can I fix this
problem? Will doing a manual k-point sampling help?



A part of the input file -


 &system
    ibrav = 4, celldm(1) = 27.9, celldm(3) = 1, nat = 71, ntyp = 1,
    ecutwfc = 30.0,
    ecutrho = 250.0,
    occupations = 'smearing', smearing = 'gaussian', degauss = 0.001,
    nspin = 2, starting_magnetization(1) = 0.7
 /
 &electrons
    diagonalization = 'cg'
    mixing_mode = 'plain'
    mixing_beta = 0.1
    conv_thr = 1.0d-6
    electron_maxstep = 200
 /

ATOMIC_SPECIES
 C 12.011  c_pbe_v1.2.uspp.F.UPF

K_POINTS {automatic}
  32 32 1 0 0 0


A part of the output file -


     the Fermi energy is   -1.9682 eV

     total energy              =    -815.17816366 Ry
     Harris-Foulkes estimate   =    -815.17815922 Ry
     estimated scf accuracy    <       0.0077 Ry

 The total energy is the sum of the following terms:

 one-electron contribution =   -5427.83442348 Ry
 hartree contribution  =2763.25828072 Ry
 xc contribution   =-257.55564014 Ry
 ewald contribution=2106.95386447 Ry
 smearing contrib. (-TS)   =  -0.00024524 Ry

 total magnetization   = 0.59 Bohr mag/cell
 absolute magnetization= 0.79 Bohr mag/cell


Thank you.

Haricharan Padmanabhan

Indian Institute of Technology Madras