Never mind, my monitor was playing tricks on me.
The actual problem is the line just before "ATOMIC_POSITIONS", which is
not empty but contains several TAB characters. Remove it and the input
will work.
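For what it's worth, a stray whitespace-only line like that can be hunted
down and removed mechanically. A minimal sketch in Python, assuming the
input file is named pw.in (a placeholder; substitute your own):

# Report and drop lines that contain only whitespace (e.g. TABs) from a
# QE input file. "pw.in" is a placeholder name.
with open("pw.in") as f:
    lines = f.readlines()

cleaned = []
for i, line in enumerate(lines, start=1):
    if line.strip() == "" and line.strip("\n") != "":
        # Whitespace-only line, like the TAB line before ATOMIC_POSITIONS.
        print(f"line {i}: whitespace-only line removed")
        continue
    cleaned.append(line)

with open("pw.in", "w") as f:
    f.writelines(cleaned)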
hth
On 15/07/2024 09:49, Lorenzo Paulatto wrote:
Hello,
if I understand correctly, you used the dash character «—» instead of the
minus sign «-» in many places throughout the file.
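These are easy to find and fix mechanically. A minimal sketch in Python,
where pw.in is a placeholder for your input file and the table covers the
usual typographic offenders:

# Replace typographic dashes with the ASCII hyphen-minus "-" that the QE
# parser expects. "pw.in" is a placeholder file name.
BAD_DASHES = {
    "\u2014": "-",  # em dash
    "\u2013": "-",  # en dash
    "\u2212": "-",  # Unicode minus sign
}

with open("pw.in", encoding="utf-8") as f:
    text = f.read()

for bad, good in BAD_DASHES.items():
    if bad in text:
        print(f"replacing {text.count(bad)} occurrence(s) of {bad!r}")
        text = text.replace(bad, good)

with open("pw.in", "w", encoding="utf-8") as f:
    f.write(text)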
On 15/07/2024 08:05, Suraj P wrote:
Dear QE users,
I'm trying to do a relaxation calculation of a Copper unit cell doped with
Nickel. 75% of the atoms are doped with
Dear Suraj, it's not an "MPI error": it is an error reading the input
data, notably one of the "cards". There is nothing obviously wrong in
your data, but consider that errors can be produced by bad characters,
DOS CR-LF line endings, a missing EOL (end-of-line) on the last line, ...
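All three can be checked mechanically; a minimal sketch in Python, with
pw.in a placeholder for the input file:

# Check a QE input file for common parse-breakers: non-ASCII bytes,
# DOS CR-LF line endings, and a missing end-of-line on the last line.
# "pw.in" is a placeholder name.
data = open("pw.in", "rb").read()

if b"\r\n" in data:
    print("DOS CR-LF line endings found (convert with dos2unix)")
if not data.endswith(b"\n"):
    print("missing EOL on the last line")
for i, line in enumerate(data.split(b"\n"), start=1):
    if any(byte > 127 for byte in line):
        print(f"line {i}: non-ASCII character(s): {line!r}")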
Paolo
Dear QE users,
I'm trying to do a relaxation calculation of a Copper unit cell doped with
Nickel. 75% of the atoms are doped with Nickel and the remaining atoms are
Copper.
During the vc-relax calculation, I'm getting an error message as follows,
and the calculation gets automatically terminated
Dear QE Users,
I am currently running PWtk scripts in an MPI configuration using the
following setup:
MPI: Open MPI 4.0.3
PWtk: 2.0
Quantum ESPRESSO: 7.2
Occasionally during the pwtk execution, after the completion of an
intermediate pw.x run, I encounter the following error:
"
error type
Dear Anibal,
It is very likely that you are running out of memory.
Best,
Michal Krompiec
Merck KGaA
On Tue, 15 Sep 2020 at 13:42, Anibal Thiago Bezerra
<anibal.beze...@unifal-mg.edu.br> wrote:
> Dear Quantum Espresso Users and Developers,
>
> I'm using simple.x to get the dielectric function fo
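One quick way to check this: pw.x prints its memory estimate near the top
of the output, in lines such as "Estimated max dynamical RAM per process".
Comparing that figure with the RAM actually available per MPI rank usually
settles the question. A minimal sketch, with pw.out a placeholder for the
output file:

# Pull the memory-estimate lines out of a pw.x output file.
# "pw.out" is a placeholder name.
with open("pw.out") as f:
    for line in f:
        if "Estimated" in line and "RAM" in line:
            print(line.rstrip())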
Dear Quantum Espresso Users and Developers,
I'm using simple.x to get the dielectric function for a gold-aluminium
alloy. With the pure systems (Au and Al), I had no problems at all. For the
alloy, the supercell has 12 atoms and the calculation ran with parameters
similar to the ones in the example
On Sun, Jan 13, 2019 at 1:37 PM Alex.Durie wrote:
What I have been unable to resolve is the following crash, which occurs
> with post-processing tools such as bands.x or pw2wannier90.x:
> forrtl: severe (24): end-of-file during read, unit 99, file
> STEM/scratch.san/ad5955/Co/./co.save/wfcup1.dat
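An end-of-file while reading a wfc file usually means the file is missing
or was truncated, e.g. because the run that wrote it died before finishing.
A minimal sketch to eyeball the file sizes, with the .save path a
placeholder for the one in the error message:

# List the sizes of the wavefunction files in a .save directory; a file of
# 0 bytes, or one much smaller than its siblings, points to a truncated
# write. "co.save" is a placeholder path.
import glob
import os

for path in sorted(glob.glob("co.save/wfc*.dat")):
    print(f"{path}: {os.path.getsize(path)} bytes")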
Dear all,
Sorry to dredge up an old(ish) problem, but in light of this new information
I was wondering if anyone could help.
As previously reported, I am using QE v6.3 pw.x with the Intel compiler
suite 16.3 on a Linux cluster with a bank of Intel Xeon processors, and am
getting strange, seemingly unr
On Mon, Dec 10, 2018 at 6:18 PM Alex.Durie wrote:
> Given as Paolo has demonstrated the problem is not within the code

unfortunately I haven't demonstrated anything like that: I have
demonstrated that it is nothing obvious and easily reproducible. I don't
think it is a bug because it seems to me
> > and the same problem occurred. I am using PWSCF v.6.3 using the Intel
> > parallel studio 2016 suite. PW was built using all intel compilers,
> > intel MPI and mkl.
> >
> > Many thanks,
> > Alex
Date: Sun, 9 Dec 2018 21:26:31 +0100
From: Paolo Giannozzi
To: Quantum Espresso users Forum
Subject: Re: [QE-users] MPI error in pw.x
If it is not a problem of your compiler or mpi libraries, it can only be
the usual problem of irreproducibility of results on different processors.
In order to figure this out, one needs, as a strict minimum, some
information on which exact version exhibits the problem, under which exact
circumstance
Dear experts,
I have been running pw.x with multiple processes quite successfully;
however, when the number of processes is high enough that the space group
has more than 7 processes, so that the subspace diagonalization no longer
uses a serial algorithm, the program crashes abruptly at abou
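One diagnostic worth trying: pw.x accepts the -ndiag command-line option,
and -ndiag 1 forces the serial diagonalization algorithm regardless of the
number of processes, which isolates the parallel (ScaLAPACK) code path as
the suspect. A minimal sketch of such a test run, where the process count,
executable, and file names are all placeholders:

# Run pw.x with -ndiag 1 to force serial subspace diagonalization.
# "pw.x", "pw.in", "pw.out", and the process count are placeholders.
import subprocess

with open("pw.in") as fin, open("pw.out", "w") as fout:
    subprocess.run(
        ["mpirun", "-np", "16", "pw.x", "-ndiag", "1"],
        stdin=fin,
        stdout=fout,
        check=True,
    )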