[QE-users] PAW compensation charge density

2020-12-22 Thread Jingyang Wang
Dear QE users and developers,

When we use plot_num=0 in pp.x for a calculation with PAW pseudopotentials, does
the output density already include the atomic compensation charge density? If
not, is there an option to include it?
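
For concreteness, this is the kind of pp.x input I mean (the keyword is plot_num; the prefix and paths here are just placeholders):

```
&INPUTPP
   prefix   = 'mysystem'   ! placeholder prefix
   outdir   = './tmp'      ! placeholder scratch directory
   filplot  = 'charge'
   plot_num = 0            ! output the charge density
/
```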

Thanks,

Jingyang Wang
Postdoctoral Fellow
Stanford University
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[QE-users] Is it possible to self-define nuclear charge for a pseudopotential?

2018-09-20 Thread Jingyang Wang
Dear users,

I would like to implement the VCA method for charge compensation of a charged
defect in a slab (as in Phys. Rev. Lett. 111, 045502 (2013)). To do this, I
need to modify the nuclear charge of some host atoms. I noticed there is a
keyword "zed" in the pseudopotential input file that specifies the nuclear
charge of an element. Can I change the value of "zed" for the pseudopotential
of a specific element to achieve this? Thanks!
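
To make the intended arithmetic concrete, here is a sketch (plain Python; the function name and the sign convention are my own and should be checked against the paper): the charge that neutralizes a defect of charge q is spread uniformly over N host atoms, shifting each atom's zed by -q/N.

```python
# Hypothetical sketch of VCA-style charge compensation:
# the charge neutralizing a defect of charge q_defect is spread
# uniformly over n_atoms host atoms, shifting each nuclear charge.
def compensated_zed(zed, q_defect, n_atoms):
    """Return the modified nuclear charge for one host atom.
    Sign convention (to be verified): a +1 defect lowers each
    zed by 1/n_atoms."""
    return zed - q_defect / n_atoms

# e.g. compensating a +1 defect over 100 Ga-like atoms (zed = 31)
new_zed = compensated_zed(31.0, +1.0, 100)   # close to 30.99
```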

Best regards,
Jingyang

Jingyang Wang
Ph.D. candidate
School of Applied and Engineering Physics
Cornell University

[QE-users] Updated virtual.x to virtual_v2.x

2018-03-26 Thread Jingyang Wang
Dear QE developers,

Some time ago I realized that virtual.x has long been unable to read the UPF
v2 format, and since nobody seemed to have the time or need to fix it, I
decided to do it myself. The updated version is named "virtual_v2.x"; as the
name suggests, it properly generates a virtual crystal pseudopotential in UPF
v2 format from two UPF v2 pseudopotentials. A nice feature of this code is
that, instead of using its own independent read and write routines in
/upftools, it uses the standard modules "upf_module" and "write_upf_v2" in
/Modules. I have compiled the code with QE v5.2.0 and successfully generated a
correct pseudopotential file, although there seems to be some kind of problem
with later versions of QE (I am not sure what it is). I am wondering if there
is a way to incorporate this code into the latest QE repository? Thank you
very much!
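
For anyone curious about what the mixing itself amounts to, it is plain linear interpolation of the tabulated quantities on a shared radial grid; a minimal NumPy sketch (dummy arrays, not the actual upf_module data structures):

```python
import numpy as np

def vca_mix(v_a, v_b, x):
    """Linearly mix two quantities tabulated on the same radial grid:
    V_VCA(r) = x * V_A(r) + (1 - x) * V_B(r)."""
    v_a = np.asarray(v_a, dtype=float)
    v_b = np.asarray(v_b, dtype=float)
    if v_a.shape != v_b.shape:
        raise ValueError("inputs must share one radial grid")
    return x * v_a + (1.0 - x) * v_b

# toy 50/50 In/Ga-like mixing of two dummy -Z/r tails
# (NOT real pseudopotential data, just an illustration)
r = np.linspace(0.01, 10.0, 5)
v_mix = vca_mix(-49.0 / r, -31.0 / r, 0.5)
```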

Best regards,
Jingyang Wang


Jingyang Wang
Ph.D. candidate
School of Applied and Engineering Physics
Cornell University

[Pw_forum] Has virtual.x been updated to read upf v2?

2017-09-04 Thread Jingyang Wang
Dear QE users,

I am trying to use upftools/virtual.x to generate a VCA pseudopotential from
In and Ga pseudopotentials in pslibrary. However, I have encountered the same
error that many people on the forum have seen before: virtual.x cannot
properly read the UPF v2 format. I am wondering whether anyone happens to have
an updated version of virtual.x that is compatible with UPF v2? If so, I would
greatly appreciate it.

Thanks,
Jingyang

Jingyang 'Mark' Wang
Ph.D. candidate
School of Applied and Engineering Physics
Cornell University
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] Is cp.x suitable for metal-semiconductor interface?

2017-06-25 Thread Jingyang Wang
Dear QE users,

I have read that the Car-Parrinello method is not suitable for metals because
of their vanishing band gap. I am wondering whether it could still be useful
for a metal-semiconductor interface configuration? If not, how about the
ensemble-DFT approach in cp.x?

Best regards,

Jingyang Wang
Ph.D. candidate
School of Applied and Engineering Physics
Cornell University

Re: [Pw_forum] ecutrho <= ecutwfc?!?

2016-05-23 Thread Jingyang Wang
Dear Paolo,

Thanks for the suggestion! I have just run the job on 1 processor, and it
apparently passed the initial cutoff consistency check (see attached). Since
the smallest allocatable unit (one node) on Stampede has 16 processors, I ran
the single-processor job on a different cluster with QE 5.1. On Stampede
itself, I ran the same job on 1 node (16 processors), and that output also
passed the initial stage before the run ended.

Best,
Jingyang

-- 
Jingyang 'Mark' Wang
School of Applied and Engineering Physics
Cornell University


Si_In_+1.out
Description: Binary data

[Pw_forum] (no subject)

2016-05-23 Thread Jingyang Wang
Dear Lorenzo, Paolo and others,

Thank you all for your very helpful suggestions! I have tried adding .d0
after the values of ecutwfc and ecutrho, but the same error persists. QE 5.4.0
on Stampede is built with ifort and icc 15.0.2. I compiled QE 5.1.2 with the
same compilers, and the same error occurred again, which suggests the bug is
compiler-related. I will try QE 5.0.3, since it was built with ifort and icc
13.0.2 on Stampede.

-- 
Jingyang 'Mark' Wang
School of Applied and Engineering Physics
Cornell University

[Pw_forum] ecutrho <= ecutwfc?!?

2016-05-22 Thread Jingyang Wang
Dear QE users,

Recently I ran a "relax" job on Stampede with QE v5.4.0 and got this very
brief error message:

 Error in routine set_cutoff (1):
 ecutrho <= ecutwfc?!?

By checking the source code, I realized this means ecutrho was set smaller
than (or equal to) ecutwfc, which is not allowed. However, that is not the
case in my calculation. My input file is below:


&CONTROL
calculation='relax'
restart_mode='from_scratch'
wf_collect=.true.
prefix='Si_In_+1'
tstress = .true.
tprnfor = .true.
outdir = '.'
disk_io = 'low',
pseudo_dir = '/scratch/03598/exengt/PP_PAW'
nstep = 300
/
&SYSTEM
ibrav=0,
celldm(1)=33.24570d0,
nat=216,
ntyp=4,
tot_charge=+1,
ecutwfc=70,
ecutrho=280,
occupations='smearing',
smearing='m-v',
degauss=0.01d0
/
&ELECTRONS
diagonalization='david'
mixing_mode = 'plain'
mixing_beta = 0.1d0
conv_thr = 1.0d-8
/
&IONS
ion_dynamics = 'bfgs',
trust_radius_max = 1.2D0
/

ATOMIC_SPECIES
 In  114.81800   In.pbesol-dn-kjpaw_psl.1.0.0.UPF
 Ga   69.72300   Ga.pbesol-dnl-kjpaw_psl.1.0.0.UPF
 As   74.92160   As.pbesol-n-kjpaw_psl.1.0.0.UPF
 Si   28.08550   Si.pbesol-nl-kjpaw_psl.1.0.0.UPF

ATOMIC_POSITIONS {alat}
  In   0.056667d0   0.297257d0   0.2932096648d0
  In   0.39d0   0.297257d0   0.2932096648d0
  In   0.72d0   0.297257d0   0.2932096648d0
  In   0.056667d0   0.630590d0   0.2932096648d0
  In   0.39d0   0.630590d0   0.2932096648d0
  In   0.72d0   0.630590d0   0.2932096648d0
  In   0.056667d0   0.963923d0   0.2932096648d0
  In   0.39d0   0.963923d0   0.2932096648d0
  In   0.72d0   0.963923d0   0.2932096648d0
  In   0.056667d0   0.297257d0   0.6291087434d0
  In   0.39d0   0.297257d0   0.6291087434d0
  In   0.72d0   0.297257d0   0.6291087434d0
  In   0.056667d0   0.630590d0   0.6291087434d0
  Si   0.3903252389d0   0.6302748236d0   0.6260076799d0
  In   0.72d0   0.630590d0   0.6291087434d0
  In   0.056667d0   0.963923d0   0.6291087434d0
  In   0.39d0   0.963923d0   0.6291087434d0
  In   0.72d0   0.963923d0   0.6291087434d0
  In   0.056667d0   0.297257d0   0.9650078221d0
  In   0.39d0   0.297257d0   0.9650078221d0
  In   0.72d0   0.297257d0   0.9650078221d0
  In   0.056667d0   0.630590d0   0.9650078221d0
  In   0.39d0   0.630590d0   0.9650078221d0
  In   0.72d0   0.630590d0   0.9650078221d0
  In   0.056667d0   0.963923d0   0.9650078221d0
  In   0.39d0   0.963923d0   0.9650078221d0
  In   0.72d0   0.963923d0   0.9650078221d0
  In   0.22d0   0.130590d0   0.2932096648d0
  In   0.556667d0   0.130590d0   0.2932096648d0
  In   0.89d0   0.130590d0   0.2932096648d0
  In   0.22d0   0.463923d0   0.2932096648d0
  In   0.556667d0   0.463923d0   0.2932096648d0
  In   0.89d0   0.463923d0   0.2932096648d0
  In   0.22d0   0.797257d0   0.2932096648d0
  In   0.556667d0   0.797257d0   0.2932096648d0
  In   0.89d0   0.797257d0   0.2932096648d0
  In   0.22d0   0.130590d0   0.6291087434d0
  In   0.556667d0   0.130590d0   0.6291087434d0
  In   0.89d0   0.130590d0   0.6291087434d0
  In   0.2246378474d0   0.4649596237d0   0.6260681613d0
  In   0.5545611959d0   0.4654446970d0   0.6319149434d0
  In   0.89d0   0.463923d0   0.6291087434d0
  In   0.2213078190d0   0.7973758071d0   0.6259265193d0
  In   0.5550308100d0   0.7961995553d0   0.6267325126d0
  In   0.89d0   0.797257d0   0.6291087434d0
  In   0.22d0   0.130590d0   0.9650078221d0
  In   0.556667d0   0.130590d0   0.9650078221d0
  In   0.89d0   0.130590d0   0.9650078221d0
  In   0.22d0   0.463923d0   0.9650078221d0
  In   0.556667d0   0.463923d0   0.9650078221d0
  In   0.89d0   0.463923d0   0.9650078221d0
  In   0.22d0   0.797257d0   0.9650078221d0
  In   0.556667d0   0.797257d0   0.9650078221d0
  In   0.89d0   0.797257d0   0.9650078221d0
  Ga   0.22d0   0.297247d0   0.1252534074d0
  Ga   0.556667d0   0.297247d0   0.1252534074d0
  Ga   0.89d0   0.297247d0   0.1252534074d0
  Ga   0.22d0   0.630580d0   0.1252534074d0
  Ga   0.556667d0   0.630580d0   0.1252534074d0
  Ga   0.89d0   0.630580d0   0.1252534074d0
  Ga   0.22d0   0.963913d0   0.1252534074d0
  Ga   0.556667d0   0.963913d0   0.1252534074d0
  Ga   0.89d0   0.963913d0   0.1252534074d0
  Ga   0.22d0   0.297247d0   0.4611524861d0
  Ga   0.556667d0   0.297247d0   0.4611524861d0
  Ga   0.89d0   0.297247d0   0.4611524861d0
  Ga   0.2203414406d0   0.6271928747d0   0.4614998444d0
  Ga   0.5567431690d0   0.6337415999d0   0.4613189160d0
  Ga   0.89d0   0.6305

[Pw_forum] Messy output from parallel execution after a debug

2015-05-19 Thread Jingyang Wang
Dear QE users,



Recently I attempted to restart a vc-relax calculation with QE v5.1 and
received the error message:


 "Subspace diagonalization in iterative solution of the eigenvalue problem:
  a serial algorithm will be used

  Error in routine invmat (1):
  error in DGETRF

  stopping ..."


According to the forum, this is due to a bug in QE v5.1:

http://qe-forge.org/pipermail/pw_forum/2014-September/105119.html



So I replaced the old PW/src/input.f90 with the one provided by Dr. Paulatto
and recompiled the program. Now another problem has appeared: any job run in
parallel on multiple processors produces very messy output, while jobs run on
a single processor work fine. One particular example (the input is
PW/examples/example01/si.scf.david.in; see attached):



 Program PWSCF v.5.1 starts on 19May2015 at 10: 8:23

 This program is part of the open-source Quantum ESPRESSO suite
 for quantum simulation of materials; please cite
 "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
  URL http://www.quantum-espresso.org",
 in publications or presentations arising from this work. More details at
 http://www.quantum-espresso.org/quote

 Serial version
 Waiting for input...
 Waiting for input...
 Reading input from standard input
 Reading input from standard input
 Reading input from standard input
 Reading input from standard input
 Message from routine read_cards :
 DEPRECATED: no units specified in ATOMIC_POSITIONS card
 Message from routine read_cards :
 ATOMIC_POSITIONS: units set to alat
 Message from routine read_cards :
 DEPRECATED: no units specified in ATOMIC_POSITIONS card
 Message from routine read_cards :
 ATOMIC_POSITIONS: units set to alat
 Message from routine read_cards :
 DEPRECATED: no units specified in ATOMIC_POSITIONS card
 Message from routine read_cards :
 ATOMIC_POSITIONS: units set to alat
 Message from routine read_cards :
 DEPRECATED: no units specified in ATOMIC_POSITIONS card
 Message from routine read_cards :
 ATOMIC_POSITIONS: units set to alat
 ...


(see attached for the full output)



Also, the program now fails to restart other jobs that were previously stopped
cleanly (via the .EXIT file). For example, near the end of Si_In_+1_3rd.out:



 "Error in routine davcio (10):
  error while reading from file
  "/fs/home/jw598/III-V/CW/128_atoms/Si_In/p1/./Si_In_+1.wfc"

  stopping ..."



(see attached for the full input and output)



I noticed that the output says QE is a serial version, although it was a
parallel build before the recompilation, and all my parallel jobs had run
successfully until then. If anyone happens to know how this strange behavior
might have arisen and can offer any advice, I would greatly appreciate it.
Thank you very much!
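
One rough way I can think of to confirm the serial-build suspicion is to count how many times the input-reading banner appears in one output file; a sketch (simulating a four-copy messy output with printf, since I cannot attach the real one here):

```shell
# If a "parallel" run is really N independent serial copies, the
# "Reading input from standard input" banner appears once per copy.
# Simulate a 4-copy messy output file for illustration:
printf 'Reading input from standard input\n%.0s' 1 2 3 4 > messy.out

# Count the banner occurrences; anything > 1 suggests a serial
# binary launched under mpirun/ibrun.
n=$(grep -c 'Reading input from standard input' messy.out)
echo "banner appeared $n times"
```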



Best regards,
-- 
Jingyang 'Mark' Wang
School of Applied and Engineering Physics
Cornell University


si.scf.david.nbs
Description: Binary data


si.scf.david.out
Description: Binary data


Si_In_+1.nbs
Description: Binary data


Si_In_+1_3rd.out
Description: Binary data