Yes, sorry, I was referring to this one:
Begin final coordinates
ATOMIC_POSITIONS (angstrom)
...
End final coordinates
The problem likely lies here instead:
electron_maxstep = 2
In a dummy calculation this results in the following (with no geometry printed):
End of self-consistent calculation
No, the geometry is printed at each optimization step.
Paolo
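With electron_maxstep = 2, the SCF loop is cut off after two iterations, almost always before self-consistency is reached. A minimal sketch of an &ELECTRONS namelist with more typical settings (illustrative values, not taken from the original input):

```fortran
&ELECTRONS
  electron_maxstep = 100     ! QE default; 2 iterations will rarely converge
  conv_thr         = 1.0d-6  ! SCF convergence threshold on the energy (Ry)
/
```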
On 5/25/23 13:55, dldu...@uco.es wrote:
Dear Giuseppe,
Thanks for your kind reply.
I am aware that I am far from any convergence criteria. I just want to
have a geometry output, just a "template" of that geometry (I am also
working with another code and I wish to compare). My question is: do I
have to reach a minimum convergence
Dear David
Preliminary note: your convergence criteria
etot_conv_thr = 1e-1
forc_conv_thr = 1e-1
ecutwfc = 10.0
ecutrho = 40.0
are *extremely* far from any sensible threshold and not suitable for a
production run.
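For comparison, a sketch of more production-like settings for a vc-relax run (illustrative values; sensible cutoffs depend on the pseudopotentials used):

```fortran
&CONTROL
  calculation   = 'vc-relax'
  etot_conv_thr = 1.0d-4   ! Ry; QE default, far tighter than 1e-1
  forc_conv_thr = 1.0d-3   ! Ry/Bohr; QE default
/
&SYSTEM
  ecutwfc = 50.0    ! Ry; check convergence for your pseudopotentials
  ecutrho = 400.0   ! Ry; typically 4-8x ecutwfc (more for ultrasoft)
/
```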
Regarding your question, you have asked for 2 scf steps and two
Dear all,
I'm working with a perovskite, doing preliminary calculations. Those
calculations do not need to be converged, but they should at least work
properly for further optimization. I'm trying to do a vc-relax on the
perovskite in order to obtain the optimal geometry. However, at the
end,
The "__GPU_MPI" flag is not strictly necessary to compile and run the MPI
parallel versions with GPU support.
It only enables more efficient communications when the MPI library has been
compiled with the necessary support for GPU-aware communications. The only
essential flag for MPI is -D__MPI.
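As an illustration, a GPU build with GPU-aware MPI enabled might be set up roughly as follows (the CUDA path, runtime version, and compute capability below are assumptions, not from the original message):

```shell
# example configure for an NVIDIA GPU build of Quantum ESPRESSO
./configure --enable-parallel --with-cuda=$CUDA_HOME \
            --with-cuda-cc=80 --with-cuda-runtime=11.8

# to enable GPU-aware MPI communications (only if the MPI library
# actually supports them), add -D__GPU_MPI to DFLAGS in make.inc, e.g.:
#   DFLAGS = ... -D__MPI -D__GPU_MPI
make pw
```

Without -D__GPU_MPI the code still runs in parallel on GPUs; data is simply staged through host memory for MPI communications.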