[QE-users] Non-collinear magnetism+spin-orbit coupling scf convergence

2022-08-02 Thread Zeeshan Ahmad
Hi all,

I am trying to perform a non-collinear magnetic + spin-orbit coupling calculation of a system with an odd number of electrons, obtained by adding a neutral iodine interstitial to methylammonium lead iodide. The SCF cycle converges readily in abinit and VASP but not in Quantum ESPRESSO. Examples of the input and output files used for abinit, QE, and VASP are at this link:
https://gitlab.com/ahzeeshan/qe-issues/-/tree/master/mag-soc-conv

The system converges in QE when either non-collinear magnetization or spin-orbit coupling is used alone in the input file, but not when both are enabled. It also converges when I specify input_dft='pz'. I am using Pseudo Dojo NC pseudopotentials. A non-collinear PBE calculation without spin-orbit coupling showed that the magnetic state is slightly lower in energy than the non-magnetic state for this system (qe-noSOC folder).

I have tried using the initial density and wavefunctions from the non-collinear calculation without SOC, or from the converged non-collinear calculation with SOC (obtained using input_dft='pz'); I have also tried changing the eigensolver, the mixing, the starting magnetization, and adding Gaussian smearing, among other things. Any suggestions for obtaining convergence will be appreciated; the kind of settings I have been varying are sketched below.
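For reference, a minimal sketch of the relevant namelist entries (the values are illustrative, not the actual input; see the linked repository for the real files):

&SYSTEM
  noncolin = .true.
  lspinorb = .true.
  starting_magnetization(2) = 0.2   ! illustrative value for the magnetic species
  occupations = 'smearing'
  smearing = 'gaussian'
  degauss = 0.01                    ! Ry, illustrative
/
&ELECTRONS
  electron_maxstep = 500
  conv_thr = 1.0d-8
  mixing_beta = 0.2                 ! reduced mixing, one of the things tried for this system
  mixing_ndim = 12
  diagonalization = 'david'         ! also varied, e.g. 'cg'
/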


Thanks,
Zeeshan

--
Zeeshan Ahmad
Postdoctoral Scholar
Pritzker School of Molecular Engineering
The University of Chicago
Web: https://ahzeeshan.github.io

[QE-users] k point symmetry and open_grid.x error

2022-02-05 Thread Zeeshan Ahmad
Hi,

When the scf input file uses an automatic grid of k-points, the open_grid.x code produces k-points between -0.5 and 0.5 (crystal coordinates). However, the EPW code requires wave functions for the full set of k-points between 0 and 1 in the first BZ and gives an error when the k-points lie between -0.5 and 0.5:

Error in routine epw_setup (1):
coarse k-mesh needs to be strictly positive in 1st BZ 

Unfortunately, this is a hybrid-functional calculation, so I cannot perform an nscf calculation or use all the k-points in the scf due to the high memory requirement.

Therefore, I tried using the kpoints.x code in PW/tools to generate IBZ k-points between 0 and 1 and specified their weights in the scf file. On running open_grid.x on this calculation, I get the error:

Error in routine exx_grid_init (1):
 wrong EXX q grid

Is it not possible to use open_grid.x other than with automatically generated k-points (it runs fine in that case)? And is there a way to generate the files for the full set of k-points between 0 and 1 instead of between -0.5 and 0.5?

The files for reproducing the error are at: 
https://gitlab.com/ahzeeshan/qe-issues/-/tree/master/open_grid 

I used a 4x4x2 grid of k-points to generate the IBZ with the kpoints.x code.
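For what it's worth, folding crystal coordinates from [-0.5, 0.5) into [0, 1) is just k -> k + 1 for each negative component. A minimal sketch of that relabelling (klist_crystal.txt is a hypothetical file holding one "kx ky kz weight" entry per line; this only rewrites the list, it does not regenerate the wave-function files EPW needs):

awk '{ for (i = 1; i <= 3; i++) if ($i < 0) $i += 1.0; printf "%12.8f %12.8f %12.8f  %s\n", $1, $2, $3, $4 }' klist_crystal.txt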

Thanks for your help,
Zeeshan




--
Zeeshan Ahmad
Postdoctoral Scholar
Pritzker School of Molecular Engineering
The University of Chicago


Re: [QE-users] scf calculation end without a crash file

2021-05-22 Thread Zeeshan Ahmad
Hi,

The memory requirement reported in your output file seems too high: more than 4-5 GB per process. Are you sure you have that much memory available? You can try running with reduced k-point parallelization to lower the per-process memory requirement; a sketch of the launch command is below.
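A minimal sketch, assuming the input file is called scf.in; the process count and number of pools are illustrative (fewer pools means more processes share each k-point's data, hence less memory per process):

mpirun -np 16 pw.x -nk 1 -inp scf.in > scf.out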


Zeeshan

--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


[QE-users] [QE-GPU] configure error

2021-03-11 Thread Zeeshan Ahmad
Hi,

I am trying to install quantum espresso 6.7-gpu for V100 gpu using the configure command:

> export CUDADIR=/opt/packages/pgi/20.11/Linux_x86_64/20.11/cuda
> ./configure CC=pgcc F77=pgf90 F90=pgf90 FC=pgf90 MPIF90=mpif90 --with-cuda=$CUDADIR --with-cuda-runtime=11.1 --with-cuda-cc=70 --enable-openmp --with-scalapack=no LIBS="-L$CUDADIR/lib64/stubs/ -L$CUDADIR/lib64/"

The $CUDADIR/lib64/stubs directory contains libcuda.so but I still get the following error (cuInit missing?):

checking build system type... x86_64-pc-linux-gnu
checking ARCH... x86_64
checking setting AR... ... ar
checking setting ARFLAGS... ... ruv
checking whether the Fortran compiler works... yes
checking for Fortran compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU Fortran compiler... no
checking whether pgf90 accepts -g... yes
configure: WARNING: F90 value is set to be consistent with value of MPIF90
checking for mpif90... mpif90
checking whether we are using the GNU Fortran compiler... no
checking whether mpif90 accepts -g... yes
checking version of mpif90... nvfortran 20.11-0
checking for Fortran flag to compile .f90 files... none
setting F90... nvfortran
setting MPIF90... mpif90
checking whether we are using the GNU C compiler... yes
checking whether pgcc accepts -g... yes
checking for pgcc option to accept ISO C89... none needed
setting CC... pgcc
setting CFLAGS... -fast -Mpreprocess
using F90... nvfortran
setting FFLAGS... -O1
setting F90FLAGS... $(FFLAGS)
setting FFLAGS_NOOPT... -O0
setting CPP... cpp
setting CPPFLAGS... -P -traditional -Uvector
setting LD... mpif90
setting LDFLAGS...
checking for Fortran flag to compile .f90 files... (cached) none
checking whether Fortran compiler accepts -Mcuda=cuda11.1... yes
checking for nvcc... /opt/packages/pgi/20.11/Linux_x86_64/20.11/compilers/bin/nvcc
checking whether nvcc works... yes
checking for cuInit in -lcuda... no
configure: error: in `/ocean/projects/phy200043p/azeeshan/software/qe/6.7-gpu':
configure: error: Couldn't find libcuda
See `config.log' for more details

config.log
Description: Binary data
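As a quick diagnostic (a sketch, assuming nm is available on the system), one might check that the stub library actually exports the cuInit symbol the configure test is looking for:

nm -D $CUDADIR/lib64/stubs/libcuda.so | grep cuInit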

--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


Re: [QE-users] workaround for projwfc with SG15 ONCV

2021-02-19 Thread Zeeshan Ahmad
This worked, thanks Paolo. I set the value of natomwfc inside the projwave 
function.

Also, my original post had an error:
"Starting wfcs are  112 randomized atomic wfcs +  108 random wfcs”
should be 
"Starting wfcs are  128 randomized atomic wfcs +  124 random wfcs”
since natomwfc was 128.

--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


[QE-users] workaround for projwfc with SG15 ONCV

2021-02-18 Thread Zeeshan Ahmad
Hi,

I ran a calculation using SG15 ONCV pseudopotentials (for one of the elements), which don't have the PSWFC section needed by projwfc.x. My system contains Pb and I atoms; the pseudopotential for I was taken from https://github.com/pipidog/ONCVPSP (FR SG15, and contains the PSWFC section), but the Pb pseudopotential was Pb_ONCV_PBE_FR-1.0.upf, which does not contain the PSWFC section.

To get projwfc.x running (without rerunning the expensive hybrid scf calculation), I regenerated the FR pseudopotential for Pb with oncvpsp version 3.3.1 and the same mesh size (there is still a difference in total_psenergy in PP_HEADER). Using this regenerated file, I added the PSWFC section to the Pb pseudopotential actually used by the calculation, located in outdir/prefix.save/. However, when I run projwfc.x, I get the following error:

Error in routine fill_nlmchi (1):
 wrong # of atomic wfcs

On checking, the value of nwfc is 256 and the value of natomwfc is 128, equal to the value reported at the beginning of the output file:

"Starting wfcs are  112 randomized atomic wfcs +  108 random wfcs"

So it seems there is no way of hacking my way into running projwfc.x, and I would have to rerun the scf calculation with the new pseudopotential?

Also, when I run projwfc.x without adding the PSWFC section to the Pb pseudopotential located in outdir/prefix.save/, projwfc.x runs without giving an error but assumes all atoms are I. Is this a bug?


Thanks,
Zeeshan

--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


Re: [QE-users] vc-relax and temperature

2021-02-11 Thread Zeeshan Ahmad
Hi Sergey,

The ion_temperature flag only works with molecular dynamics, not with vc-relax, which is used to find the optimum cell at 0 K. If you want temperature control and a variable cell, you should use calculation = 'vc-md'; a sketch is below.

Zeeshan


--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


Re: [QE-users] Memory requirement for open_grid.x

2021-01-13 Thread Zeeshan Ahmad
I managed to run this using 2 cores instead of all available cores (it took only 30 minutes), since I think Lorenzo had suggested earlier on this list that it is a pain to run open_grid.x in parallel. I would still be interested to know if there are other ways of reducing the memory requirement.

Zeeshan



[QE-users] Memory requirement for open_grid.x

2021-01-13 Thread Zeeshan Ahmad
Hi,

I am working with a ~800-electron system, and the memory requirement of open_grid.x seems to be > 1 TB, much higher than that of the scf calculation itself (~130 GB). Is this expected? Are there ways to reduce the memory requirement?

I am running open_grid.x using mpirun:

mpirun -np ncores open_grid.x -i opengrid.in > opengrid.out

with ~4 GB of memory per core.

Thanks,
Zeeshan
--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


Re: [QE-users] Nonlinear Core Correction

2020-12-28 Thread Zeeshan Ahmad
Hi Andrew,

This thread should be helpful 
https://www.mail-archive.com/users@lists.quantum-espresso.org/msg35142.html 

Considering the current state of hybrid pseudopotentials, it should be fine to 
go ahead with your nlcc pseudos.


Zeeshan
--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


Re: [QE-users] k-point parallelization with localization_thr>0

2020-10-30 Thread Zeeshan Ahmad
Hi Lorenzo,

The traceback points to an error in the following lines of code. I'm using version 6.6:


 Pairs(full):   64   Pairs(included):   32   Pairs(%):   50.00
#0  0x7f7e1e1775cf in ???
#0  0x7ef9143ec5cf in ???
#0  0x7f1004e455cf in ???
#0  0x7f53ff1115cf in ???
#1  0x53ccde in __loc_scdm_k_MOD_absovg_k
    at /pylon5/phy200043p/azeeshan/software/qe/6.6-gcc_openmpi/PW/src/loc_scdm_k.f90:263
#2  0x53da51 in __loc_scdm_k_MOD_localize_orbitals_k
    at /pylon5/phy200043p/azeeshan/software/qe/6.6-gcc_openmpi/PW/src/loc_scdm_k.f90:94
#3  0x4197c7 in electrons_
    at /pylon5/phy200043p/azeeshan/software/qe/6.6-gcc_openmpi/PW/src/electrons.f90:190
#4  0x533b40 in run_pwscf_
    at /pylon5/phy200043p/azeeshan/software/qe/6.6-gcc_openmpi/PW/src/run_pwscf.f90:144
#5  0x407565 in pwscf
    at /pylon5/phy200043p/azeeshan/software/qe/6.6-gcc_openmpi/PW/src/pwscf.f90:106
#6  0x40729c in main
    at /pylon5/phy200043p/azeeshan/software/qe/6.6-gcc_openmpi/PW/src/pwscf.f90:40



[QE-users] k-point parallelization with localization_thr>0

2020-10-29 Thread Zeeshan Ahmad
Hi,

I got a segmentation fault when I tried to parallelize over k-points with localization_thr > 0 and the k-points specified in crystal coordinates. It works fine with an automatic grid. Is this the expected behavior?

Here is an input file for reproducing the problem (the launch command is sketched after the input). I tried -nk 2 in both cases; -nk 4 gives an error even with the automatic grid (both -nk 2 and -nk 4 work fine without localization_thr).

&CONTROL
calculation = 'scf'
restart_mode='from_scratch',
prefix='Si-HSE',
pseudo_dir = '/home/azeeshan/pseudopot/sg15_ONCV/',
outdir='./si/'
 /
&SYSTEM
ibrav=  2, celldm(1) =10.20, nat=  2, ntyp= 1,
ecutwfc =30.0,  nbnd = 8,
input_dft='hse', 
nqx1 = 1, nqx2 = 1, nqx3 = 1, 
x_gamma_extrapolation = .true., 
 localization_thr = 0.002
exxdiv_treatment = 'gygi-baldereschi',
nosym = .true., noinv = .true.
 /
&ELECTRONS
mixing_beta = 0.7
 /
ATOMIC_SPECIES
 Si  28.086  Si_ONCV_PBE-1.1.upf
ATOMIC_POSITIONS alat
 Si 0.00 0.00 0.00 
 Si 0.25 0.25 0.25 
K_POINTS crystal
8
  0.  0.  0.  1.25e-01
  0.  0.  0.5000  1.25e-01
  0.  0.5000  0.  1.25e-01
  0.  0.5000  0.5000  1.25e-01
  0.5000  0.  0.  1.25e-01
  0.5000  0.  0.5000  1.25e-01
  0.5000  0.5000  0.  1.25e-01
  0.5000  0.5000  0.5000  1.25e-01
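For reference, a sketch of the launch command used to reproduce the crash (the process count is illustrative and the input/output file names are assumed):

mpirun -np 4 pw.x -nk 2 -inp si-hse.scf.in > si-hse.scf.out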


Thanks,
Zeeshan
--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


Re: [QE-users] pseudopotential file reading error in 6.6a2-gpu

2020-10-23 Thread Zeeshan Ahmad
That worked, thanks Paolo.

Zeeshan

--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


[QE-users] pseudopotential file reading error in 6.6a2-gpu

2020-10-22 Thread Zeeshan Ahmad
Hi,

I obtain the following error (only in the gpu version) when I change my ONCV 
pseudopotential to fully relativistic:

FIO-F-225/list-directed read/internal file/lexical error-- unknown token type.
 In source file xmltools.f90, at line number 107

It seems to be due to a pseudopotential file reading error, similar to
https://www.vasp.at/forum/viewtopic.php?t=330

The pseudopotentials are SG15 ONCV, downloaded from
http://www.quantum-simulation.org/potentials/sg15_oncv/ (sg15_oncv_upf_2020-02-06.tar.gz)

The input file that gives the error is below (it works fine when Pb_ONCV_PBE_FR-1.0.upf is replaced with Pb_ONCV_PBE-1.2.upf):

&CONTROL
   title = '9009114.cif'
 calculation = 'scf'
restart_mode = 'from_scratch'
  outdir = 'scf'
  pseudo_dir = './'
  prefix = '9009114'
 disk_io = 'none'
   nstep = 400
 /
&SYSTEM
   ibrav = 4
   celldm(1) =8.60770253529410
   celldm(3) =1.53172338090011
 nat = 3
ntyp = 2
 ecutwfc = 70
 /
&ELECTRONS
electron_maxstep = 200
conv_thr = 1.0D-10
 mixing_beta = 0.7
 diagonalization = 'david'
 /

ATOMIC_SPECIES
   Pb  207.20  Pb_ONCV_PBE_FR-1.0.upf
I  126.904000  I_ONCV_PBE-1.2.upf
ATOMIC_POSITIONS crystal
Pb  0.0  0.0  0.0
I   0.3  0.7  0.26500
I   0.7  0.4  0.73500
K_POINTS automatic
8  8  6   0 0 0


Thanks,
Zeeshan




--
Zeeshan Ahmad
Postdoctoral Researcher
Pritzker School of Molecular Engineering
University of Chicago


Re: [QE-users] wrong celldm(3) with ibrav -13 in phonon

2020-08-04 Thread Zeeshan Ahmad
Hi Pietro,

I built the q2r.x executable using the tar file you linked and also the develop version. In both cases, I got the same error again. Do I have to run pw.x and ph.x again to avoid the error?

Thanks,
-Zeeshan
--
Zeeshan Ahmad
PhD candidate, Mechanical Engineering
Carnegie Mellon University
https://www.andrew.cmu.edu/~azeeshan/

[QE-users] wrong celldm(3) with ibrav -13 in phonon

2020-08-03 Thread Zeeshan Ahmad
Hi,

I ran pw.x and ph.x calculations on an ibrav=-13 system. When I ran q2r.x in the same directory, I got the following error:

 Error in routine latgen (13):
 wrong celldm(3)

After adding some print statements inside Modules/latgen.f90, I found that control reaches the ibrav=13 branch (with celldm(3)<0.d0) instead of ibrav=-13 as specified in the scf.in file. I wonder if this is related to the recent change of lattice vectors for ibrav=-13 in version 6.5, since pw.x and ph.x do not give any error.

Here are the scf, ph, and q2r input files for reproducing the error.

scf.in

&CONTROL
   title = 'Li2CO3'
 calculation = 'scf'
restart_mode = 'from_scratch'
  outdir = './scf1'
  pseudo_dir = '/home/azeeshan/pseudopot/all_pbe_UPF_v1.5'
  prefix = 'Li2CO3'
 disk_io = 'low'
   verbosity = 'default'
   etot_conv_thr = 0.1
   forc_conv_thr = 0.0001
   nstep = 400
 tstress = .true.
 tprnfor = .true.
 /
&SYSTEM
   ibrav = -13
   celldm(1) =   15.795917182
   celldm(2) =   0.595028736 
   celldm(3) =   0.7409843951
   celldm(5) =   0.4192777936
 nat = 12
ntyp = 3
 ecutwfc = 60
 ecutrho = 600
 /
&ELECTRONS
electron_maxstep = 200
conv_thr = 1.0D-11
  diago_thr_init = 1e-4
 startingpot = 'atomic'
 startingwfc = 'atomic'
 mixing_mode = 'plain'
 mixing_beta = 0.5
 mixing_ndim = 8
 diagonalization = 'david'
 /
&IONS
ion_dynamics = 'bfgs'
 /

ATOMIC_SPECIES
   Li   7.016003437  li_pbe_v1.4.uspp.F.UPF
   C   12.010700  c_pbe_v1.2.uspp.F.UPF
   O   15.999400  o_pbe_v1.2.uspp.F.UPF

ATOMIC_POSITIONS crystal
Li 0.75080411 0.35557873 0.16417001
Li 0.64442127 0.24919589 0.66417001
Li 0.35557873 0.75080411 0.33582999
Li 0.24919589 0.64442127 0.83582999
C 0.93252757 0.93252757 0.75
C 0.06747243 0.06747243 0.25
O 0.67497784 0.67497784 0.75
O 0.32502216 0.32502216 0.25
O 0.21140108 0.91630278 0.687171
O 0.08369722 0.78859892 0.187171
O 0.91630278 0.21140108 0.812829
O 0.78859892 0.08369722 0.312829

K_POINTS automatic
8  8  6   0 0 0
——
ph.in

Li2CO3 phonon dispersion
&INPUTPH
prefix = 'Li2CO3'
tr2_ph = 1.0d-14
ldisp = .true.
nq1 = 2
nq2 = 2
nq3 = 2
outdir = './scf1'
fildyn = 'Li2CO31.dyn'
amass(1) = 7.016003437
amass(2) = 12.010700
amass(3) = 15.999400
!   recover = .true.
/
——
q2r.in

&INPUT
fildyn = 'Li2CO31.dyn'
zasr = 'crystal'
flfrc = 'Li2CO31.fc'
/
——


Thanks,
-Zeeshan
--
Zeeshan Ahmad
PhD candidate, Mechanical Engineering
Carnegie Mellon University
https://www.andrew.cmu.edu/~azeeshan/ <http://www.andrew.cmu.edu/user/azeeshan/>

[QE-users] restart pw.x md using cp.x outputs

2019-02-21 Thread Zeeshan Ahmad
Hi,

Is there a way to restart a pw.x md calculation from the output generated by the cp.x code? (The idea is to use an equilibrated trajectory generated by a cp.x NVT simulation with a Nose-Hoover thermostat, since the md calculation in pw.x does not implement that thermostat.)

The documentation says, for the disk_io='high' option in cp.x:

> CP code will write Kohn-Sham wfc files and additional information in 
> data-file.xml in order to restart with a PW calculation or to use 
> postprocessing tools. If disk_io is not set to 'high', the data file written 
> by CP will not be readable by PW or PostProc.
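For reference, a minimal sketch of the &CONTROL settings for the cp.x run whose output is meant to be read back by pw.x; everything except disk_io is illustrative:

&CONTROL
  calculation = 'cp'
  restart_mode = 'restart'
  disk_io = 'high'   ! so that the data files written by CP are readable by PW/PostProc
  ndr = 50           ! read restart from prefix_50.save
  ndw = 51           ! write restart to prefix_51.save
  prefix = 'prefix'
  outdir = './tmp/'
/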


I tried to restart with pw.x (renaming the prefix_50.save directory generated by cp.x to the prefix.save directory used by pw.x) but got the following error:

 %%
 Error in routine davcio (10):
 error while reading from file "/home/azeeshan/test/900/tmp/prefix.wfc1"
 %%

There was no numbered .wfc file generated by cp.x; only an empty prefix.wfc file is present in the outdir tmp/.

I am using Quantum ESPRESSO 6.2.1 compiled with the -D__OLDXML flag.

Thanks,
-Zeeshan

--
Zeeshan Ahmad
PhD candidate, Mechanical Engineering
Carnegie Mellon University


[QE-users] niter_cg_restart parameter in cp.x

2019-02-13 Thread Zeeshan Ahmad
Hi all,

The explanation of the 'niter_cg_restart' parameter in the cp.x input documentation is:

"frequency in iterations for which the conjugate-gradient algorithm for electronic relaxation is restarted", with a default value of 20.

Does this mean that, for all types of electron_dynamics such as 'none', 'verlet', 'damp', etc., the conjugate-gradient algorithm will be used to converge the wave functions every 20 steps by default?

Thanks,
-Zeeshan

--
Zeeshan Ahmad
PhD candidate, Mechanical Engineering
Carnegie Mellon University


[Pw_forum] cppp.x not working with intel compiler

2017-11-01 Thread Zeeshan Ahmad
Dear users and developers,

I succeeded in compiling and running cp.x using intel libraries and impi (intel-mpi) for qe-6.1 and qe-6.2. However, in both versions cp.x works but the post processing does not work, i.e. cppp.x gives an error starting with (complete error attached):

*** Error in `/home/azeeshan/software/qe/qe-6.1/bin/cppp.x': double free or corruption (out): 0x0229f760 ***
=== Backtrace: =
/usr/lib64/libc.so.6(+0x7c619)[0x7f4bfa1c1619]
/usr/lib64/libc.so.6(__open_catalog+0xb6)[0x7f4bfa178e56]
/usr/lib64/libc.so.6(catopen+0x48)[0x7f4bfa178ab8]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x4ed0d1]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x4f6fbd]
/usr/lib64/libpthread.so.0(+0xf5e0)[0x7f4bfa8195e0]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x531009]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x52e81d]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x52758e]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x40496d]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x40441e]
/usr/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f4bfa166c05]
/home/azeeshan/software/qe/qe-6.1/bin/cppp.x[0x404329]

On the other hand, cppp.x runs fine with the gnu compiled version of quantum espresso 6.1. Do you have any idea what might be causing the error? I have attached the make.inc for both the gnu compiled and intel compiled versions.

Thanks,
Zeeshan

--
Zeeshan Ahmad
Ph.D. candidate, Mechanical Engineering
Carnegie Mellon University
https://www.contrib.andrew.cmu.edu/~azeeshan/
# make.inc.  Generated from make.inc.in by configure.

# compilation rules

.SUFFIXES :
.SUFFIXES : .o .c .f .f90

# most fortran compilers can directly preprocess c-like directives: use
#   $(MPIF90) $(F90FLAGS) -c $<
# if explicit preprocessing by the C preprocessor is needed, use:
#   $(CPP) $(CPPFLAGS) $< -o $*.F90
#   $(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!

.f90.o:
$(MPIF90) $(F90FLAGS) -c $<

# .f.o and .c.o: do not modify

.f.o:
$(F77) $(FFLAGS) -c $<

.c.o:
$(CC) $(CFLAGS)  -c $<



# Top QE directory, useful for locating libraries,  linking QE with plugins
# The following syntax should always point to TOPDIR:
TOPDIR = $(dir $(abspath $(filter %make.inc,$(MAKEFILE_LIST))))
# if it doesn't work, uncomment the following line (edit if needed):

# TOPDIR = /home/azeeshan/software/qe/qe-6.1

# DFLAGS  = precompilation options (possible arguments to -D and -U)
#   used by the C compiler and preprocessor
# FDFLAGS = as DFLAGS, for the f90 compiler
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas

# MANUAL_DFLAGS  = additional precompilation option(s), if desired
#  BEWARE: it does not work for IBM xlf! Manually edit FDFLAGS
MANUAL_DFLAGS  =
DFLAGS =  -D__FFTW -D__MPI
FDFLAGS= $(DFLAGS) $(MANUAL_DFLAGS)

# IFLAGS = how to locate directories with *.h or *.f90 file to be included
#  typically -I../include -I/some/other/directory/
#  the latter contains .e.g. files needed by FFT libraries

IFLAGS = -I$(TOPDIR)/include -I../include/

# MOD_FLAGS = flag used by f90 compiler to locate modules
# Each Makefile defines the list of needed modules in MODFLAGS

MOD_FLAG  = -I

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90 = mpif90
#F90   = gfortran
CC = cc
F77= gfortran

# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS

CPP= cpp
CPPFLAGS   = -P -traditional $(DFLAGS) $(IFLAGS)

# compiler flags: C, F90, F77
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax

CFLAGS = -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS   = $(FFLAGS) -x f95-cpp-input $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS = -O3 -g

# compiler flags without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack

FFLAGS_NOOPT   = -O0 -g

# compiler flag needed by some compilers when the main program is not fortran
# Currently used for Yambo

FFLAGS_NOMAIN   = 

# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty

LD = mpif90
LDFLAGS= -g -pthread
LD_LIBS= 

# External Libraries (if any) : blas, lapack, 

[Pw_forum] cp.x: atomic positions and cell parameters in restart file

2016-12-28 Thread Zeeshan Ahmad
Hi QE users,

I was wondering why it is required to have ATOMIC_POSITIONS and CELL_PARAMETERS in a cp.x input file when restarting. I have a cp.x input file with restart_mode='reset_counters' that gives a segmentation fault if I don't include the atomic positions and cell parameters. They are read from the restart file anyway, and this also shows up in the output file: "positions will be re-read from restart file".

Thanks,
Zeeshan
--
Zeeshan Ahmad
PhD student, Mechanical Engineering
Carnegie Mellon University
email: azees...@andrew.cmu.edu
http://www.andrew.cmu.edu/~azeeshan/


[Pw_forum] NVT simulations using cp.x

2016-10-15 Thread Zeeshan Ahmad
Dear all,

I intend to use cp.x to perform constant-temperature simulations. The document cp_user_guide.pdf recommends adding a random displacement to the ions before switching on the thermostat, and not increasing the temperature too much. In order to achieve a high temperature of ~800-1000 K, the random displacement must be increased in the first step. Based on my microcanonical simulations, I found that adding a very large random displacement of ~0.9-1 a.u. gives the required average temperature for my system. So my question is: is it okay to first run a microcanonical simulation with such a large random displacement to reach a temperature close to the required (high) temperature, and then turn on the thermostat? (A sketch of the settings I have in mind is below.)
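For reference, a minimal sketch of the &IONS settings I have in mind, assuming the tranp/amprp randomization flags of cp.x; all values, in particular the displacement amplitude, are illustrative only:

&IONS
  ion_dynamics = 'verlet'
  ion_temperature = 'nose'   ! Nose-Hoover thermostat
  tempw = 900.0              ! target temperature in K (illustrative)
  fnosep = 60.0              ! thermostat frequency in THz (illustrative)
  tranp(1) = .true.          ! randomize initial positions of species 1
  amprp(1) = 0.9             ! amplitude of the random displacement (illustrative)
/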

Thank you,
--
Zeeshan Ahmad
PhD student, Mechanical Engineering
Carnegie Mellon University
email: azees...@andrew.cmu.edu
http://www.andrew.cmu.edu/user/azeeshan/
