Thank you
On Fri, Dec 11, 2015 at 12:40 AM, Axel Kohlmeyer <akohl...@gmail.com> wrote:
> On Thu, Dec 10, 2015 at 5:25 PM, mohammed shambakey
> <shambak...@gmail.com> wrote:
> > Hi
> >
> > when running pw.x on a multi-node cluster using OpenMP+MPI, is it [...]
>
Hi
when running pw.x on a multi-node cluster with OpenMP+MPI, is it possible
to record which core and node each thread id is running on?
Regards
--
Mohammed
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum
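One general way to get such a mapping (a sketch, not QE-specific; the Open MPI variable OMPI_COMM_WORLD_RANK and the Linux /proc interface are assumptions — other MPI launchers export a different rank variable):

```shell
# Print, for each MPI rank, its node name and allowed CPU list.
# OMP_DISPLAY_AFFINITY (OpenMP >= 5.0) additionally makes every OpenMP
# thread print its own binding when the program enters a parallel region.
export OMP_NUM_THREADS=4
export OMP_DISPLAY_AFFINITY=TRUE
mpirun -np 2 bash -c \
  'echo "rank ${OMPI_COMM_WORLD_RANK:-?} on $(hostname): cpus $(grep Cpus_allowed_list /proc/self/status | cut -f2)"'
```

With Open MPI, "mpirun --report-bindings" prints a similar map from the launcher side.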
[Attachment: para-speedup.sh]
On Thu, Nov 12, 2015 at 9:17 PM, Paolo Giannozzi <p.gianno...@gmail.com>
wrote:
> Which file exactly are you refering to?
>
> Paolo
>
> On Thu, Nov 12, 2015 at 6:52 PM, mohammed shambakey <shambak...@gmail.com>
> wrote:
>
>> Thank you Paolo [...]
Thank you Paolo, but the file still contains the "nelec" variable.
Is it safe to remove it, or does something else need to be changed?
Regards
On Thu, Nov 12, 2015 at 10:53 AM, Paolo Giannozzi <p.gianno...@gmail.com>
wrote:
>
>
> On Tue, Nov 10, 2015 at 3:41 PM, mohammed shambakey <shambak...@gmail.com> wrote:
Hi
I have two questions:
1- I'm trying to run the pwscf-small-benchmark, but it gives me an error
about a namelist. So I tried to open the generated "large-test.in" file in
PWgui, and it reports the following syntax error:
namelist's variable "nelec" not allowed. Skipping the rest of the file
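For context (a hedged note based on the error above): in later pw.x versions the number of electrons is computed from the pseudopotentials, so an explicit nelec entry in the &SYSTEM namelist is rejected by the parser. Removing that single line is usually enough; the values below are illustrative placeholders, not a working input:

```
&SYSTEM
    ibrav     = 2          ! illustrative values only
    celldm(1) = 10.20
    nat       = 2
    ntyp      = 1
    ecutwfc   = 30.0
    ! nelec = 8.0          <-- delete this line; pw.x derives it itself
/
```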
Hi
I have a question about the small size benchmarks
(pwscf-small-benchmark.tar.gz) available at (
http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseBrowse_package_id=36).
My experience is in HPC, so please forgive my limited knowledge of QE.
The tests calculate the CPU time for ELECTRONS. Why [...]
[...] Paolo Giannozzi <p.gianno...@gmail.com>
wrote:
> Yes: "wall time" = "what the clock on the wall shows"
>
> Paolo
>
> On Wed, Sep 30, 2015 at 12:29 PM, mohammed shambakey <shambak...@gmail.com
> > wrote:
>
>> Hi
>>
>> In output [...]
Hi
In the output files there are a CPU time and a wall time. Does the wall time
include the CPU time plus any time for data transfer, I/O, and anything else?
Regards
--
Mohammed
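As a general illustration of the distinction (not QE-specific): CPU time counts only the cycles the processor spends executing the process, while wall time is elapsed real time, so it also covers I/O, MPI communication, and any waiting. The shell's time builtin shows the same split:

```shell
# 'real' is wall-clock time; 'user' + 'sys' is CPU time.
# sleep consumes almost no CPU, so real far exceeds user+sys here.
time sleep 1
```

In a well-balanced run the two are close; a wall time much larger than the CPU time usually points at I/O or communication overhead.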
> [...] are used only when computing V*psi_i and |psi_i|^2
> (instead of parallelizing over PW using all processors, one parallelizes
> over PW *and* band index i)
>
> Paolo
>
>
> On Sun, Aug 16, 2015 at 1:19 PM, mohammed shambakey <shambak...@gmail.com>
> wrote:
>
>
Hi
1. As I understood from the user manual, the “world” of processors is
divided into a number of “images”. Each “image” is divided into a number of
“pools”. Each “pool” is divided into a number of “bands”. The “PW” level depends
on the least specified one of the previous four (i.e., if “band” is [...]
Hi Suresh
I'm new to Quantum ESPRESSO, but how many processes do you specify when
running the command (the "-procs" option with "poe", or "-n"/"-np" with
"mpirun/mpiexec")? How many cores does your cluster have? Because I
don't think the "-ni" and "-nk" options will distribute the processes by [...]
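For reference, pw.x selects its parallelization levels with command-line flags; a sketch (the input file name is taken from the message below, and this particular split is only an example — check the flags against your QE version):

```shell
# 16 MPI processes split into 2 images x 2 k-point pools, i.e. 4 processes
# per pool, with each pool further divided into 2 band groups.
mpirun -np 16 pw.x -ni 2 -nk 2 -nb 2 -in 6x6-6+6H+F.scf.in > scf.out
```

The -np value must be divisible by the product of -ni and -nk; the processes left in each pool are what the plane-wave and band parallelization work on.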
Hi
I am trying to run the following command:
mpirun -np 8 pw.x < ./6x6-6+6H+F.scf.in > ./results/6x6-6+6H+F.scf.out
but it fails due to the missing file /C.pz-vbc.UPF. The file has been downloaded
and is accessible in the "pseudo" folder, but I still get the same error.
Please help.
Attached are the "CRASH" and [...]
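A common first thing to check in this situation (a hedged guess, since the CRASH file is not shown here): pw.x looks for UPF files in pseudo_dir, not in the current directory, and a leading "/" in the reported path suggests pseudo_dir is unset or wrong. Something like this in &CONTROL, with the path adjusted to wherever C.pz-vbc.UPF actually lives:

```
&CONTROL
    calculation = 'scf'            ! illustrative; keep your existing settings
    pseudo_dir  = './pseudo/'      ! directory containing C.pz-vbc.UPF
/
```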
Hi
This is my first time using Quantum ESPRESSO. I'm trying to install it on an
HPC cluster running Red Hat Enterprise Linux 6 with the following modules:
gcc/4.8.1
blas/open64/64/1
mvapich2/open64/64/1.9
slurm/2.5.7
fftw3/openmpi/open64/64/3.3.3
blacs/openmpi/open64/64/1.1patch03
open64/4.5.2.1