Re: [QE-users] Supercell relaxation

2024-05-13 Thread Daniel Rothchild via users
Hi Vishva,

As the CRASH file says, the first and second atoms in your input file
differ by exactly one lattice vector. In fact, there are several such pairs
of atoms in your input file -- in other words, under periodic boundary
conditions those pairs of atoms sit directly on top of each other. How
are you generating your input structure? If you're familiar with Python,
you might try using ase's structure building tools. Something like:

from ase.build import bulk, make_supercell
atoms = make_supercell(bulk("Fe"), [[2, 0, 0], [0, 2, 0], [0, 0, 2]])

This won't make the simple cubic unit cell in your original input file, but
it will make a 2x2x2 BCC supercell of Fe that you can use to write out the
cell and positions for use with quantum espresso.
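If you want ASE to also write the pw.x input for you (so the CELL_PARAMETERS
and ATOMIC_POSITIONS cards are guaranteed to be consistent), something along
these lines should work -- just a sketch: the pseudopotential file name,
cutoff, smearing, and k-point values below are placeholders to replace with
your own settings:

from ase.build import bulk, make_supercell
from ase.io import write

# 2x2x2 supercell of the primitive bcc Fe cell -> 8 atoms
atoms = make_supercell(bulk("Fe"), [[2, 0, 0], [0, 2, 0], [0, 0, 2]])

# Placeholder pseudopotential name and parameters -- replace with your own.
write("fe_relax.in", atoms, format="espresso-in",
      pseudopotentials={"Fe": "Fe.UPF"},
      input_data={"calculation": "relax", "ecutwfc": 40,
                  "occupations": "smearing", "smearing": "mv",
                  "degauss": 0.02},
      kpts=(4, 4, 4))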

Daniel Rothchild
Unaffiliated (recent UC Berkeley PhD grad)

On Mon, May 13, 2024 at 5:58 AM VISHVA JEET ANAND via users <
users@lists.quantum-espresso.org> wrote:

> Thank you for your answer. When I put the atomic positions in angstrom, I
> got a crash report, which is attached here.
>
> On Mon, May 13, 2024 at 4:38 PM Giovanni Cantele <
> giovanni.cant...@spin.cnr.it> wrote:
>
>> Dear Vishva,
>>
>> before running any calculation it is a good practice to visualize your
>> input structure, because many times convergence issues derive
>> from errors in the input geometry.
>>
>> If you do so with yours, you see atoms at extremely small distances from
>> each other, which prevents pw.x from reasonably converging
>> in an acceptable number of steps.
>>
>> The second step is to understand why the structure is wrong. In your case
>> my guess is that you are telling pw.x that the positions are in alat
>> units, whereas they are actually in Angstrom.
>>
>> The number of atoms depends on what you want to do. If you want to use a
>> simple cubic unit cell (as in your input file) and the bcc primitive cell
>> contains N atoms, then the 1x1x1 CONVENTIONAL unit cell will contain 2N
>> atoms. If, on top of that, you also want to build a 2x2x2 supercell, then
>> that number has to be multiplied by 2x2x2.
>>
>> Giovanni
>> --
>>
>> Giovanni Cantele, PhD
>> CNR-SPIN
>> c/o Dipartimento di Fisica
>> Universita' di Napoli "Federico II"
>> Complesso Universitario M. S. Angelo - Ed. 6
>> Via Cintia, I-80126, Napoli, Italy
>> e-mail: giovanni.cant...@spin.cnr.it 
>> Phone: +39 081 676910
>> Skype contact: giocan74
>>
>> ResearcherID: http://www.researcherid.com/rid/A-1951-2009
>> Web page: https://sites.google.com/view/giovanni-cantele/home
>>
>>
>> Il giorno lun 13 mag 2024 alle ore 12:08 VISHVA JEET ANAND via users <
>> users@lists.quantum-espresso.org> ha scritto:
>>
>>> Dear Users
>>> I am trying to run a relaxation calculation on a 2x2x2 supercell of the
>>> Fe (bcc) structure, but the scf does not converge in 1000 iterations.
>>> Secondly, how many atoms are there in the Fe (bcc) structure? My input
>>> file is attached here.
>>>
>>> --
>>> With Regards
>>> Vishva Jeet Anand
>>> Research Scholar
>>> Department of Chemistry
>>>
>>
>>
>
> --
> With Regards
> Vishva Jeet Anand
> Research Scholar
> Department of Chemistry
>

Re: [QE-users] Optimal pw command line for large systems and only Gamma point

2024-05-13 Thread Giuseppe Mattioli


Ciao Nicola
You're right, I mixed two different things, with a misleading result. The
first point was "use Gaussian smearing because, in my experience, it makes
the scf more stable". The second was "if you use Gaussian smearing and
scf_must_converge=.false., then you may reduce the smearing to lower values
that avoid smearing too much charge density across the semiconductor band
gap (if there is any in such nanoclusters...), with a partial occupation of
orbitals that should be empty".

Thanks for the clarification, I think it will be useful to Antonio.
Best
Giuseppe

Quoting Nicola Marzari :


On 13/05/2024 17:26, Giuseppe Mattioli wrote:


    occupations= 'smearing'
    smearing= 'cold'
    degauss= 0.05 ! I know it's quite large, but necessary to  
stabilize the SCF at this preliminary stage (no geometry step done  
yet)

    mixing_beta= 0.4


If you want to stabilize the scf it is better to use a Gaussian  
smearing and to reduce degauss (to 0.01) and mixing beta (to 0.1 or  
even 0.05~0.01). In the case of a relax calculation with a  
difficult first step, try to use scf_must_converge=.false. and a  
reasonable electron_maxstep (30~50). It often helps when the scf is  
not completely going astray.



Ciao Giuseppe, I would agree that in a semiconductor it might be  
more natural to use Gaussian (although even for cold things are now  
sorted out -  
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.107.195122);  
but I wonder why reducing the smearing would help convergence.


To me, the smaller the smearing, the more you can be affected by
level-crossing instabilities?


nicola




--
Prof Nicola Marzari, Chair of Theory and Simulation of Materials, EPFL
Director, National Centre for Competence in Research NCCR MARVEL, SNSF
Head, Laboratory for Materials Simulations, Paul Scherrer Institut
Contact info and websites at http://theossrv1.epfl.ch/Main/Contact




GIUSEPPE MATTIOLI
CNR - ISTITUTO DI STRUTTURA DELLA MATERIA
Via Salaria Km 29,300 - C.P. 10
I-00015 - Monterotondo Scalo (RM)
Mob (*preferred*) +39 373 7305625
Tel + 39 06 90672342 - Fax +39 06 90672316
E-mail: 


Re: [QE-users] Optimal pw command line for large systems and only Gamma point

2024-05-13 Thread Nicola Marzari via users

On 13/05/2024 17:26, Giuseppe Mattioli wrote:


    occupations= 'smearing'
    smearing= 'cold'
    degauss= 0.05 ! I know it's quite large, but necessary to 
stabilize the SCF at this preliminary stage (no geometry step done yet)

    mixing_beta= 0.4


If you want to stabilize the scf it is better to use a Gaussian smearing 
and to reduce degauss (to 0.01) and mixing beta (to 0.1 or even 
0.05~0.01). In the case of a relax calculation with a difficult first 
step, try to use scf_must_converge=.false. and a reasonable 
electron_maxstep (30~50). It often helps when the scf is not completely 
going astray.



Ciao Giuseppe, I would agree that in a semiconductor it might be more 
natural to use Gaussian (although even for cold things are now sorted 
out - 
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.107.195122); but 
I wonder why reducing the smearing would help convergence.


To me, the smaller the smearing, the more you can be affected by
level-crossing instabilities?


nicola




--
Prof Nicola Marzari, Chair of Theory and Simulation of Materials, EPFL
Director, National Centre for Competence in Research NCCR MARVEL, SNSF
Head, Laboratory for Materials Simulations, Paul Scherrer Institut
Contact info and websites at http://theossrv1.epfl.ch/Main/Contact


Re: [QE-users] Optimal pw command line for large systems and only Gamma point

2024-05-13 Thread Giuseppe Mattioli


Dear Antonio


The actual time spent per scf cycle is about 33 minutes.


This is not so bad. :-)


The relevant parameters in the input file are the following:


Some relevant parameters are not shown.


    input_dft= 'pz'
    ecutwfc= 25


Which kind of pseudopotential? You didn't set ecutrho...
What about ibrav and celldm?
I suppose that you really want to perform LDA calculations for some reason.


    occupations= 'smearing'
    smearing= 'cold'
    degauss= 0.05 ! I know it's quite large, but necessary to  
stabilize the SCF at this preliminary stage (no geometry step done  
yet)

    mixing_beta= 0.4


If you want to stabilize the scf it is better to use a Gaussian  
smearing and to reduce degauss (to 0.01) and mixing beta (to 0.1 or  
even 0.05~0.01). In the case of a relax calculation with a difficult  
first step, try to use scf_must_converge=.false. and a reasonable  
electron_maxstep (30~50). It often helps when the scf is not  
completely going astray.
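In input-file terms, that suggestion amounts to a fragment like the
following (a sketch of the relevant entries only, with the indicative
values mentioned above; everything else stays as in your input):

&SYSTEM
  ...
  occupations = 'smearing'
  smearing    = 'gaussian'
  degauss     = 0.01
/
&ELECTRONS
  mixing_beta       = 0.1
  electron_maxstep  = 50
  scf_must_converge = .false.
/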



    nbnd= 2010

    diagonalization= 'ppcg'


davidson should be faster.


And, if possible, also to reduce the number of nodes?



 Estimated total dynamical RAM >    1441.34 GB


you may try with 7-8 nodes according to this estimate.
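For example, 1441.34 GB spread over 8 nodes is roughly 180 GB per node
(about 206 GB over 7 nodes), so 7-8 nodes should fit machines with on the
order of 192-256 GB of RAM per node -- the per-node memory figure is an
assumption to be checked against your cluster.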

HTH
Giuseppe

Quoting Antonio Cammarata via users :

I did some tests. For 1000 Si atoms, I use 2010 bands because I need
to get the band gap value; moreover, being a cluster, the surface
states of the truncated bonds might close the gap, especially at the
first steps of the geometry optimization, so it's better to include a
few empty bands. I managed to run the calculation by using 10 nodes and
a max of 40 cores per node. My question now is: can you suggest
optimal command-line options and/or input settings to speed up the
calculation? And, if possible, also to reduce the number of nodes?
The relevant parameters in the input file are the following:


    input_dft= 'pz'
    ecutwfc= 25
    occupations= 'smearing'
    smearing= 'cold'
    degauss= 0.05 ! I know it's quite large, but necessary to  
stabilize the SCF at this preliminary stage (no geometry step done  
yet)

    nbnd= 2010

    diagonalization= 'ppcg'
    mixing_mode= 'plain'
    mixing_beta= 0.4

The actual time spent per scf cycle is about 33 minutes. I use QE v.
7.3 compiled with openmpi and scalapack. I have access to the intel
compilers too, but I did some tests and the difference is just tens
of seconds. I have only the Gamma point; here is some info
about the grid and the estimated RAM usage:


 Dense  grid: 24616397 G-vectors FFT dimensions: ( 375, 375, 375)
 Dynamical RAM for wfc: 235.91 MB
 Dynamical RAM for wfc (w. buffer): 235.91 MB
 Dynamical RAM for   str. fact:   0.94 MB
 Dynamical RAM for   local pot:   0.00 MB
 Dynamical RAM for  nlocal pot:    2112.67 MB
 Dynamical RAM for    qrad:   0.80 MB
 Dynamical RAM for  rho,v,vnew:   6.04 MB
 Dynamical RAM for   rhoin:   2.01 MB
 Dynamical RAM for    rho*nmix:  15.03 MB
 Dynamical RAM for   G-vectors:   3.99 MB
 Dynamical RAM for  h,s,v(r/c):   0.46 MB
 Dynamical RAM for  &lt;psi|beta&gt;: 552.06 MB
 Dynamical RAM for  wfcinit/wfcrot:    1305.21 MB
 Estimated static dynamical RAM per process >   2.31 GB
 Estimated max dynamical RAM per process >   3.60 GB
 Estimated total dynamical RAM >    1441.34 GB

Thanks a lot in advance for your kind help.

All the best

Antonio


On 10. 05. 24 12:01, Paolo Giannozzi wrote:

On 5/10/24 08:58, Antonio Cammarata via users wrote:


pw.x -nk 1 -nt 1 -nb 1 -nd 768 -inp qe.in > qe.out


too many processors for linear-algebra parallelization. 1000 Si  
atoms = 2000 bands (assuming an insulator with no spin  
polarization). Use a few tens of processors at most
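As an illustration only (a sketch; the launcher and total number of MPI
ranks depend on your machine), "a few tens of processors" for the linear
algebra could look like:

pw.x -nk 1 -nd 64 -inp qe.in > qe.out

where -nd is naturally chosen as a perfect square (here 8x8 = 64), since
the linear algebra is distributed on a square grid of processors; the
remaining MPI ranks are still used for plane-wave/FFT parallelization.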



"some processors have no G-vectors for symmetrization".


which sounds strange to me: with the Gamma point symmetrization is  
not even needed




  Dense  grid: 30754065 G-vectors FFT dimensions: ( 400, 400, 400)


This is what a 256-atom Si supercell with 30 Ry cutoff yields:

 Dense  grid:   825897 G-vectors FFT dimensions: ( 162, 162, 162)

I guess you may reduce the size of your supercell

Paolo


  Dynamical RAM for wfc: 153.50 MB
  Dynamical RAM for wfc (w. buffer): 153.50 MB
  Dynamical RAM for   str. fact:   0.61 MB
  Dynamical RAM for   local pot:   0.00 MB
  Dynamical RAM for  nlocal pot:    1374.66 MB
  Dynamical RAM for    qrad:   0.87 MB
  Dynamical RAM for  rho,v,vnew:   5.50 MB
  Dynamical RAM for   rhoin:   1.83 MB
  Dynamical RAM for    rho*nmix:   9.78 MB
  Dynamical RAM for   G-vectors:   2.60 MB
  Dynamical RAM for  h,s,v(r/c):   0.25 MB
  Dy

Re: [QE-users] Optimal pw command line for large systems and only Gamma point

2024-05-13 Thread Antonio Cammarata via users
I did some tests. For 1000 Si atoms, I use 2010 bands because I need to
get the band gap value; moreover, being a cluster, the surface states of
the truncated bonds might close the gap, especially at the first steps
of the geometry optimization, so it's better to include a few empty
bands. I managed to run the calculation by using 10 nodes and a max of
40 cores per node. My question now is: can you suggest optimal
command-line options and/or input settings to speed up the calculation?
And, if possible, also to reduce the number of nodes? The relevant
parameters in the input file are the following:


    input_dft= 'pz'
    ecutwfc= 25
    occupations= 'smearing'
    smearing= 'cold'
    degauss= 0.05 ! I know it's quite large, but necessary to stabilize 
the SCF at this preliminary stage (no geometry step done yet)

    nbnd= 2010

    diagonalization= 'ppcg'
    mixing_mode= 'plain'
    mixing_beta= 0.4

The actual time spent per scf cycle is about 33 minutes. I use QE v. 7.3
compiled with openmpi and scalapack. I have access to the intel
compilers too, but I did some tests and the difference is just tens of
seconds. I have only the Gamma point; here is some info about the grid
and the estimated RAM usage:


 Dense  grid: 24616397 G-vectors FFT dimensions: ( 375, 375, 375)
 Dynamical RAM for wfc: 235.91 MB
 Dynamical RAM for wfc (w. buffer): 235.91 MB
 Dynamical RAM for   str. fact:   0.94 MB
 Dynamical RAM for   local pot:   0.00 MB
 Dynamical RAM for  nlocal pot:    2112.67 MB
 Dynamical RAM for    qrad:   0.80 MB
 Dynamical RAM for  rho,v,vnew:   6.04 MB
 Dynamical RAM for   rhoin:   2.01 MB
 Dynamical RAM for    rho*nmix:  15.03 MB
 Dynamical RAM for   G-vectors:   3.99 MB
 Dynamical RAM for  h,s,v(r/c):   0.46 MB
 Dynamical RAM for  &lt;psi|beta&gt;: 552.06 MB
 Dynamical RAM for  wfcinit/wfcrot:    1305.21 MB
 Estimated static dynamical RAM per process >   2.31 GB
 Estimated max dynamical RAM per process >   3.60 GB
 Estimated total dynamical RAM >    1441.34 GB

Thanks a lot in advance for your kind help.

All the best

Antonio


On 10. 05. 24 12:01, Paolo Giannozzi wrote:

On 5/10/24 08:58, Antonio Cammarata via users wrote:


pw.x -nk 1 -nt 1 -nb 1 -nd 768 -inp qe.in > qe.out


too many processors for linear-algebra parallelization. 1000 Si atoms 
= 2000 bands (assuming an insulator with no spin polarization). Use a 
few tens of processors at most


"some processors have no G-vectors for symmetrization". 


which sounds strange to me: with the Gamma point symmetrization is not 
even needed




  Dense  grid: 30754065 G-vectors FFT dimensions: ( 400, 400, 400)


This is what a 256-atom Si supercell with 30 Ry cutoff yields:

 Dense  grid:   825897 G-vectors FFT dimensions: ( 162, 162, 162)

I guess you may reduce the size of your supercell

Paolo


  Dynamical RAM for wfc: 153.50 MB
  Dynamical RAM for wfc (w. buffer): 153.50 MB
  Dynamical RAM for   str. fact:   0.61 MB
  Dynamical RAM for   local pot:   0.00 MB
  Dynamical RAM for  nlocal pot:    1374.66 MB
  Dynamical RAM for    qrad:   0.87 MB
  Dynamical RAM for  rho,v,vnew:   5.50 MB
  Dynamical RAM for   rhoin:   1.83 MB
  Dynamical RAM for    rho*nmix:   9.78 MB
  Dynamical RAM for   G-vectors:   2.60 MB
  Dynamical RAM for  h,s,v(r/c):   0.25 MB
  Dynamical RAM for  &lt;psi|beta&gt;: 552.06 MB
  Dynamical RAM for  wfcinit/wfcrot: 977.20 MB
  Estimated static dynamical RAM per process >   1.51 GB
  Estimated max dynamical RAM per process >   2.47 GB
  Estimated total dynamical RAM >    1900.41 GB

I managed to run the simulation with 512 atoms, cg diagonalization 
and 3 nodes on the same machine with command line


pw.x -nk 1 -nt 1 -nd 484 -inp qe.in > qe.out

Please, do you have any suggestions on how to set optimal
parallelization parameters to avoid the memory issue and run the
calculation? I am also planning to run simulations on nanoclusters
with more than 1000 atoms.


Thanks a lot in advance for your kind help.

Antonio





--
___
Antonio Cammarata, PhD in Physics
Associate Professor in Applied Physics
Advanced Materials Group
Department of Control Engineering - KN:G-204
Faculty of Electrical Engineering
Czech Technical University in Prague
Karlovo Náměstí, 13
121 35, Prague 2, Czech Republic
Phone: +420 224 35 5711
Fax:   +420 224 91 8646
ORCID: orcid.org/-0002-5691-0682
WoS ResearcherID: A-4883-2014


Re: [QE-users] Supercell relaxation

2024-05-13 Thread VISHVA JEET ANAND via users
Thank you for your answer. When I put the atomic positions in angstrom, I
got a crash report, which is attached here.

On Mon, May 13, 2024 at 4:38 PM Giovanni Cantele <
giovanni.cant...@spin.cnr.it> wrote:

> Dear Vishva,
>
> before running any calculation it is a good practice to visualize your
> input structure, because many times convergence issues derive
> from errors in the input geometry.
>
> If you do so with yours, you see atoms at extremely small distances from
> each other, which prevents pw.x from reasonably converging
> in an acceptable number of steps.
>
> The second step is to understand why the structure is wrong. In your case
> my guess is that you are telling pw.x that the positions are in alat
> units, whereas they are actually in Angstrom.
>
> The number of atoms depends on what you want to do. If you want to use a
> simple cubic unit cell (as in your input file) and the bcc primitive cell
> contains N atoms, then the 1x1x1 CONVENTIONAL unit cell will contain 2N
> atoms. If, on top of that, you also want to build a 2x2x2 supercell, then
> that number has to be multiplied by 2x2x2.
>
> Giovanni
> --
>
> Giovanni Cantele, PhD
> CNR-SPIN
> c/o Dipartimento di Fisica
> Universita' di Napoli "Federico II"
> Complesso Universitario M. S. Angelo - Ed. 6
> Via Cintia, I-80126, Napoli, Italy
> e-mail: giovanni.cant...@spin.cnr.it 
> Phone: +39 081 676910
> Skype contact: giocan74
>
> ResearcherID: http://www.researcherid.com/rid/A-1951-2009
> Web page: https://sites.google.com/view/giovanni-cantele/home
>
>
> Il giorno lun 13 mag 2024 alle ore 12:08 VISHVA JEET ANAND via users <
> users@lists.quantum-espresso.org> ha scritto:
>
>> Dear Users
>> I am trying to run a relaxation calculation on a 2x2x2 supercell of the
>> Fe (bcc) structure, but the scf does not converge in 1000 iterations.
>> Secondly, how many atoms are there in the Fe (bcc) structure? My input
>> file is attached here.
>>
>> --
>> With Regards
>> Vishva Jeet Anand
>> Research Scholar
>> Department of Chemistry
>>
>
>

-- 
With Regards
Vishva Jeet Anand
Research Scholar
Department of Chemistry


CRASH
Description: Binary data


fe_relax.out
Description: chemical/gulp

Re: [QE-users] Supercell relaxation

2024-05-13 Thread Giovanni Cantele
Dear Vishva,

before running any calculation it is a good practice to visualize your
input structure, because many times convergence issues derive
from errors in the input geometry.

If you do so with yours, you see atoms at extremely small distances from
each other, which prevents pw.x from reasonably converging
in an acceptable number of steps.

The second step is to understand why the structure is wrong. In your case
my guess is that you are telling pw.x that the positions are in alat
units, whereas they are actually in Angstrom.
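In practice the fix is just the units label on the card, e.g. (a sketch
showing the two atoms of the conventional bcc cell with a = 2.87 Angstrom;
your 2x2x2 supercell will of course list all of its atoms):

ATOMIC_POSITIONS {angstrom}
Fe  0.000  0.000  0.000
Fe  1.435  1.435  1.435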

The number of atoms depends on what you want to do. If you want to use a
simple cubic unit cell (as in your input file) and the bcc primitive cell
contains N atoms, then the 1x1x1 CONVENTIONAL unit cell will contain 2N
atoms. If, on top of that, you also want to build a 2x2x2 supercell, then
that number has to be multiplied by 2x2x2.
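If it helps, a quick way to check the counting with ASE (a sketch; the
primitive bcc cell has 1 atom, the conventional cubic cell 2, and a 2x2x2
conventional supercell 16):

from ase.build import bulk

prim = bulk("Fe")              # primitive bcc cell: 1 atom
conv = bulk("Fe", cubic=True)  # conventional cubic cell: 2 atoms
sup = conv * (2, 2, 2)         # 2x2x2 conventional supercell: 16 atoms
print(len(prim), len(conv), len(sup))  # -> 1 2 16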

Giovanni
-- 

Giovanni Cantele, PhD
CNR-SPIN
c/o Dipartimento di Fisica
Universita' di Napoli "Federico II"
Complesso Universitario M. S. Angelo - Ed. 6
Via Cintia, I-80126, Napoli, Italy
e-mail: giovanni.cant...@spin.cnr.it 
Phone: +39 081 676910
Skype contact: giocan74

ResearcherID: http://www.researcherid.com/rid/A-1951-2009
Web page: https://sites.google.com/view/giovanni-cantele/home


Il giorno lun 13 mag 2024 alle ore 12:08 VISHVA JEET ANAND via users <
users@lists.quantum-espresso.org> ha scritto:

> Dear Users
> I am trying to run a relaxation calculation on a 2x2x2 supercell of the
> Fe (bcc) structure, but the scf does not converge in 1000 iterations.
> Secondly, how many atoms are there in the Fe (bcc) structure? My input
> file is attached here.
>
> --
> With Regards
> Vishva Jeet Anand
> Research Scholar
> Department of Chemistry
>

[QE-users] Supercell relaxation

2024-05-13 Thread VISHVA JEET ANAND via users
Dear Users
I am trying to run a relaxation calculation on a 2x2x2 supercell of the
Fe (bcc) structure, but the scf does not converge in 1000 iterations.
Secondly, how many atoms are there in the Fe (bcc) structure? My input
file is attached here.

-- 
With Regards
Vishva Jeet Anand
Research Scholar
Department of Chemistry


fe_relax.in
Description: Binary data

[QE-users] [SPAM] QE Born effective charges and Dielectric Constant

2024-05-13 Thread 孙昊冉
Dear professors and experts:

When I use pw.x and ph.x to calculate the Born effective charges and dielectric
constant of some materials, there is a problem: the Born effective charges and
dielectric constant are not written to ph.out.

The output in the ph.out file looks like this:

 Alpha used in Ewald sum =   2.8000

 PHONON   :  1h34m CPU  1h35m WALL

 Representation #   1 mode #   1

 Self-consistent Calculation

 Pert. #  1: Fermi energy shift (Ry) =  -1.0412E-02  -4.0390E-28

 iter #   1 total cpu time :  5943.6 secs   av.it.:   6.2
 thresh= 1.000E-02 alpha_mix =  0.700 |ddv_scf|^2 =  7.812E-08

 Pert. #  1: Fermi energy shift (Ry) =  -2.3676E-02   1.0097E-28

 iter #   2 total cpu time :  6205.5 secs   av.it.:  18.7
 thresh= 2.795E-05 alpha_mix =  0.700 |ddv_scf|^2 =  1.017E-07

 Pert. #  1: Fermi energy shift (Ry) =  -3.2299E-02  -5.0487E-29

 iter #   3 total cpu time :  6458.7 secs   av.it.:  17.8
 thresh= 3.188E-05 alpha_mix =  0.700 |ddv_scf|^2 =  8.469E-07

 Pert. #  1: Fermi energy shift (Ry) =   6.3630E-03  -5.0487E-29

 iter #   4 total cpu time :  6657.1 secs   av.it.:  13.0
 thresh= 9.203E-05 alpha_mix =  0.700 |ddv_scf|^2 =  1.923E-08

 Pert. #  1: Fermi energy shift (Ry) =   1.4925E-02  -1.2622E-29

The Born effective charges and dielectric constant should be printed after
"PHONON   :  1h34m CPU  1h35m WALL" and before "Representation #   1 mode #
1", but there is no such output there.

My input file is as follows.

pw.in:

&CONTROL
  calculation='scf'
  restart_mode='from_scratch'
  tprnfor=.true.
  tstress=.true.
  prefix ='*'
  pseudo_dir = '*/qe/qe6.8/pseudo'
  outdir='./tmp'
/
&SYSTEM
  ibrav= 0,
  nat= 53,
  ntyp= 2,
  occupations = 'smearing', smearing = 'gauss', degauss = 1.0d-4,
  ecutwfc = 125,
  ecutrho = 900,
/
&ELECTRONS
  conv_thr = 1.0d-12
  diago_david_ndim=4
/
ATOMIC_SPECIES
***

ph.in:

&inputph
  tr2_ph=1.0d-14,
  prefix ='**',
  !epsil=.false.,
  ldisp=.true.,
  nq1=2, nq2=2, nq3=2,
  amass(1)=*,
  amass(2)=*,
  outdir='./tmp',
  fildyn='*',
  !start_q=num
  !last_q=num
  recover=.true.,
 /
!0.0 0.0 0.0

Thank you!
Dr Haoran Sun

XinJiang University
Urumqi
