Re: [QE-users] no geometry information at the end

2023-05-25 Thread Giuseppe Mattioli
Yes, sorry. I referred to this one:

   Begin final coordinates
   ATOMIC_POSITIONS (angstrom)
   ...
   End final coordinates

The problem is likely here, instead:

   electron_maxstep = 2

In a dummy calculation it results in this (with no geometry printed):

   End of self-consistent calculation
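For context: with electron_maxstep = 2 the SCF loop stops unconverged, so pw.x aborts the relaxation before the final-coordinates block is ever reached. A minimal sketch of the relevant &ELECTRONS namelist (values are illustrative; scf_must_converge is a documented pw.x option):

   &ELECTRONS
      electron_maxstep = 100        ! pw.x default; gives the SCF loop room to converge
      conv_thr = 1.0d-6             ! illustrative SCF convergence threshold
      ! scf_must_converge = .false. ! alternative: do not abort the relaxation
                                    ! when electron_maxstep is reached
   /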

Re: [QE-users] no geometry information at the end

2023-05-25 Thread Paolo Giannozzi
No, the geometry is printed at each optimization step. Paolo

On 5/25/23 13:55, dldu...@uco.es wrote: Dear Giuseppe, Thanks for your kind reply. I am aware that I am far from any convergence criteria. I just want to have a geometry output, just a "template" of that geometry (I am also
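Since only the final-coordinates block is gated on convergence, the intermediate geometries that pw.x prints after every ionic step can be pulled straight out of the output file. A quick sketch, assuming the output file is named pw.out and the cell contains 5 atoms (adjust the -A count to your atom count):

   grep -A 5 "ATOMIC_POSITIONS" pw.out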

Re: [QE-users] no geometry information at the end

2023-05-25 Thread dlduran
Dear Giuseppe, Thanks for your kind reply. I am aware that I am far from any convergence criteria. I just want to have a geometry output, just a "template" of that geometry (I am also working with another code and I wish to compare). My question is: do I have to reach a minimum convergence

Re: [QE-users] no geometry information at the end

2023-05-25 Thread Giuseppe Mattioli
Dear David
Preliminary note: your convergence criteria

   etot_conv_thr = 1e-1
   forc_conv_thr = 1e-1
   ecutwfc = 10.0
   ecutrho = 40.0

are *extremely* far from any sensible threshold and not suitable for a production run. Regarding your question, you have asked for 2 scf steps and two
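For comparison, the pw.x defaults for the two thresholds are already orders of magnitude tighter than the values above. A sketch of more typical settings (the cutoffs are illustrative and depend on the pseudopotentials used):

   &CONTROL
      etot_conv_thr = 1.0d-4   ! Ry; pw.x default
      forc_conv_thr = 1.0d-3   ! Ry/Bohr; pw.x default
   /
   &SYSTEM
      ecutwfc = 50.0           ! Ry; illustrative, set by the hardest pseudopotential
      ecutrho = 400.0          ! Ry; ~8x ecutwfc, a common choice for ultrasoft pseudopotentials
   /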

[QE-users] no geometry information at the end

2023-05-25 Thread dlduran
Dear all, I'm working with a perovskite, doing preliminary calculations. Those calculations do not need to be converged, but they should at least run properly for further optimization. I'm trying to do a vc-relax on the perovskite in order to obtain the optimal geometry. However, at the end,
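As context for the thread, a minimal skeleton of a vc-relax input is sketched below; the prefix is hypothetical and the system-specific entries are omitted. Note that vc-relax requires the &IONS and &CELL namelists to be present:

   &CONTROL
      calculation = 'vc-relax'
      prefix = 'perovskite'    ! hypothetical prefix
   /
   &SYSTEM
      ! ibrav, nat, ntyp, ecutwfc, ecutrho, ... go here (system-specific, omitted)
   /
   &ELECTRONS
   /
   &IONS
   /
   &CELL
   /
   ! ATOMIC_SPECIES, ATOMIC_POSITIONS, CELL_PARAMETERS (if ibrav=0) and K_POINTS
   ! cards follow the namelists.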

[QE-users] Re: Re: [QE-GPU] compilation issue : NVFORTRAN-F-1225-Unmatched directive

2023-05-25 Thread Pietro Davide Delugas
The "__GPU_MPI" flag is not strictly necessary to compile and run the MPI parallel versions with GPU. It only enables more efficient communications when the MPI library has been compiled with the necessary support for GPU-aware communications. The only essential flag for MPI is the -D__MPI.