Re: [QE-users] QE-GPU: Discrepancy in forces and problem in using OMP threading

2022-03-15 Thread Filippo Spiga
Independently of the presence of a GPU, it is good practice NOT to
oversubscribe physical cores.

So, made-up example: if your socket has 128 cores and you want to use 16
MPI processes, then the number of OpenMP threads is 8 (128/16). If you specify
more, you oversubscribe and as a result performance may suck. It is also good
practice to have an MPI:GPU ratio of 1:1 or maybe 2:1. But start with 1:1.
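
As a minimal sketch of that arithmetic in a job script (the variable names
here are illustrative, not Slurm or QE built-ins):

CORES_PER_SOCKET=128       # physical cores, not hardware threads
NTASKS=16                  # MPI ranks on the socket
export OMP_NUM_THREADS=$(( CORES_PER_SOCKET / NTASKS ))   # = 8 in this example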

Regarding the discrepancy in the atomic forces, I will let the developers
comment. If you really believe it is a bug, open a bug report on GitLab
(https://gitlab.com/QEF/q-e/-/issues) and provide everything needed to
reproduce the error.

HTH

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Mon, 14 Mar 2022 at 17:51, Manish Kumar <
manish.ku...@acads.iiserpune.ac.in> wrote:

>   Dear Filippo,
>
> Thank you very much for your reply.
>
> The "# of threads" is the value of OMP_NUM_TRHREADS. I used nGPU=4
> and OMP_NUM_TRHREADS=48. I think the combination is not appropriate.
> The OMP_NUM_TRHREADS value should not be higher than 12. Am I correct?
>
> On one node, I am able to run the calculation. For a bigger system (388
> atoms, 3604 electrons) I used multiple nodes (2 to 4 nodes each with 4
> GPUs). The calculation got killed during the force calculation with the
> following error messages:
>
> %%
>  Error in routine addusforce_gpu (1):
>  cannot allocate buffers
> %%
>
> The slurm script (for 2 nodes) for the above calculation is the following:
> #-
> #SBATCH --nodes=2
> #SBATCH --gres=gpu:4
> #SBATCH --ntasks=8
> #SBATCH --ntasks-per-node=4
> #SBATCH --cpus-per-task=12
>
> export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
>
> mpirun -np 8 pw.x -inp input.in
> or
> mpirun -np 8 --map-by ppr:4:node:PE=12 pw.x -inp input.in
> #-
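> 
> A quick way to verify the resulting binding before a production run (assuming
> Open MPI, whose --report-bindings flag prints one line per rank):
> 
> mpirun -np 8 --map-by ppr:4:node:PE=12 --report-bindings pw.x -inp input.in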
>
> I cannot solve or understand the root cause of this error. Do you have any
> suggestions to resolve it?
> Also, I would appreciate your comments on the discrepancy between CPU and
> GPU, which I mentioned in my previous email.
>
> Thank you in advance!
>
> Best regards
> Manish Kumar
> IISER Pune, India
>
>
> On Fri, Mar 4, 2022 at 3:05 PM Filippo Spiga 
> wrote:
>
>> Ops, typo while typing from the phone...
>>
>> "are you using OMP_NUM_THREADS=48 or OMP_NUM_THREADS=12?"
>>
>> (everything else is correct)
>>
>> --
>> Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga
>>
>>
>> On Fri, 4 Mar 2022 at 09:33, Filippo Spiga 
>> wrote:
>>
>>> Dear Manish,
>>>
>>> when you use nGPU=4, does the "# of Threads" column specify the aggregate
>>> number of threads? Meaning, are you using OMP_NUM_THREADS=48 or
>>> OMP_NUM_THREADS=12? From your email it is not clear and, if you
>>> oversubscribe physical cores with threads or processes, then performance is
>>> not going to be great.
>>>
>>> Also, you must manage bindings properly, otherwise MPI processes bound to a
>>> GPU on another socket need to cross the awful CPU-to-CPU link. Have a look
>>> at the '--map-by' option in mpirun. For 4 GPUs, using 4 MPI processes and 12
>>> OpenMP threads, your mpirun will look like this:
>>>
>>> export OMP_NUM_THREADS=12
>>> mpirun -np 4 --map-by ppr:4:node:PE=12 ./pw.x
>>>
>>> If you are running on an HPC system managed by someone else, try reaching
>>> out to the User Support team to get guidance on correct binding and
>>> environment. What you are observing is very likely not related to QE-GPU
>>> but to how you are running your calculations.
>>>
>>> HTH
>>>
>>> --
>>> Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga
>>>
>>>
>>> On Wed, 2 Mar 2022 at 08:40, Manish Kumar <
>>> manish.ku...@acads.iiserpune.ac.in> wrote:
>>>
>>>> Dear all,
>>>>
>>>> I am using QE-GPU compiled on a 48-core Intel(R) Xeon(R) Platinum 8268
>>>> CPU @ 2.90GHz and four NVIDIA V100 GPU cards. To use all the CPUs, I am
>>>> using the OMP_NUM_THREADS variable in the slurm script. The jobs are run
>>>> with "mpirun -np [nGPU] pw.x", where nGPU refers to the number of GPUs
>>>> used. Our system size (130 electrons and 64 k-points, the input file is
>>>> given below) is comparable to some systems in J. Chem. Phys. 152, 154105
>>>> (2020); https://doi.org/10.1063/5.0005082.
>>>>
>>>> I have two issues/questions with QE-GPU:

Re: [QE-users] QE-GPU: Discrepancy in forces and problem in using OMP threading

2022-03-04 Thread Filippo Spiga
Ops, typo while typing from the phone...

"are you using OMP_NUM_THREADS=48 or OMP_NUM_THREADS=12?"

(everything else is correct)

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Fri, 4 Mar 2022 at 09:33, Filippo Spiga  wrote:

> Dear Manish,
>
> when you use nGPU=4, does the "# of Threads" column specify the aggregate
> number of threads? Meaning, are you using OMP_NUM_THREADS=48 or
> OMP_NUM_THREADS=12? From your email it is not clear and, if you
> oversubscribe physical cores with threads or processes, then performance is
> not going to be great.
>
> Also, you must manage bindings properly, otherwise MPI processes bound to a
> GPU on another socket need to cross the awful CPU-to-CPU link. Have a look
> at the '--map-by' option in mpirun. For 4 GPUs, using 4 MPI processes and 12
> OpenMP threads, your mpirun will look like this:
>
> export OMP_NUM_THREADS=12
> mpirun -np 4 --map-by ppr:4:node:PE=12 ./pw.x
>
> If you are running on an HPC system managed by someone else, try reaching out
> to the User Support team to get guidance on correct binding and environment.
> What you are observing is very likely not related to QE-GPU but to how you
> are running your calculations.
>
> HTH
>
> --
> Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga
>
>
> On Wed, 2 Mar 2022 at 08:40, Manish Kumar <
> manish.ku...@acads.iiserpune.ac.in> wrote:
>
>> Dear all,
>>
>> I am using QE-GPU compiled on a 48-core Intel(R) Xeon(R) Platinum 8268
>> CPU @ 2.90GHz and four NVIDIA V100 GPU cards. To use all the CPUs, I am
>> using the OMP_NUM_THREADS variable in the slurm script. The jobs are run
>> with "mpirun -np [nGPU] pw.x", where nGPU refers to the number of GPUs
>> used. Our system size (130 electrons and 64 k-points, the input file is
>> given below) is comparable to some systems in J. Chem. Phys. 152, 154105
>> (2020); https://doi.org/10.1063/5.0005082.
>>
>> I have two issues/questions with QE-GPU:
>> 1. The largest discrepancy in the atomic force between CPU and GPU is
>> 1.34x10^-4 Ry/Bohr. What is the acceptable value for the discrepancy?
>> 2. I am experiencing a significant increase in CPU time when I use
>> multiple OMP threads for SCF calculations, as you can see below. Could you
>> please suggest any solution to this and let me know if I am doing anything
>> incorrectly? Any help would be much appreciated.
>> The details are as follows:
>>
>> nGPU=1
>> 
>> # of Threads   CPU Time (s)   WALL Time (s)
>> 01                  254.23          384.27
>> 02                  295.45          466.33
>> 03                  328.89          538.62
>> 04                  348.81          602.85
>> 08                  501.31          943.32
>> 12                  698.45         1226.86
>> 16                  836.71         1505.39
>> 20                  905.77         1645.66
>> 24                 1094.81         1973.97
>> 28                 1208.93         2278.81
>> 32                 1403.27         2570.51
>> 36                 1688.97         3068.91
>> 40                 1820.06         3306.49
>> 44                 1905.88         3603.96
>> 48                 2163.18         4088.75
>> 
>> nGPU=2
>> 
>> # of Threads   CPU Time (s)   WALL Time (s)
>> 01                  226.69          329.51
>> 02                  271.29          336.65
>> 03                  312.36          335.24
>> 04                  341.50          333.20
>> 06                  400.42          328.66
>> 12                  632.82          332.90
>> 24                  992.02          335.28
>> 48                 1877.65          438.40

Re: [QE-users] QE-GPU: Discrepancy in forces and problem in using OMP threading

2022-03-04 Thread Filippo Spiga
Dear Manish,

when you use nGPU=4, does the "# of Threads" column specify the aggregate
number of threads? Meaning, are you using OMP_NUM_THREADS=48 or
OMP_NUM_THREADS=12? From your email it is not clear and, if you
oversubscribe physical cores with threads or processes, then performance is
not going to be great.

Also, you must manage bindings properly, otherwise MPI processes bound to a GPU
on another socket need to cross the awful CPU-to-CPU link. Have a look at the
'--map-by' option in mpirun. For 4 GPUs, using 4 MPI processes and 12 OpenMP
threads, your mpirun will look like this:

export OMP_NUM_THREADS=12
mpirun -np 4 --map-by ppr:4:node:PE=12 ./pw.x
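
If ranks end up on the wrong GPU, a per-rank binding wrapper is one option (a
sketch only; OMPI_COMM_WORLD_LOCAL_RANK is set by Open MPI, and note that QE
may already assign GPUs per rank by itself). Make the script executable first:

#!/bin/bash
# bind-gpu.sh (hypothetical helper): give each local MPI rank its own GPU
export CUDA_VISIBLE_DEVICES=$OMPI_COMM_WORLD_LOCAL_RANK
exec "$@"

mpirun -np 4 --map-by ppr:4:node:PE=12 ./bind-gpu.sh ./pw.x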

If you are running on an HPC system managed by someone else, try reaching out
to the User Support team to get guidance on correct binding and environment.
What you are observing is very likely not related to QE-GPU but to how you are
running your calculations.

HTH

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Wed, 2 Mar 2022 at 08:40, Manish Kumar <
manish.ku...@acads.iiserpune.ac.in> wrote:

> Dear all,
>
> I am using QE-GPU compiled on a 48-core Intel(R) Xeon(R) Platinum 8268 CPU
> @ 2.90GHz and four NVIDIA V100 GPU cards. To use all the CPUs, I am using
> the OMP_NUM_THREADS variable in the slurm script. The jobs are run with
> "mpirun -np [nGPU] pw.x", where nGPU refers to the number of GPUs used. Our
> system size (130 electrons and 64 k-points, the input file is given below)
> is comparable to some systems in J. Chem. Phys. 152, 154105 (2020);
> https://doi.org/10.1063/5.0005082.
>
> I have two issues/questions with QE-GPU:
> 1. The largest discrepancy in the atomic force between CPU and GPU is
> 1.34x10^-4 Ry/Bohr. What is the acceptable value for the discrepancy?
> 2. I am experiencing a significant increase in CPU time when I use
> multiple OMP threads for SCF calculations, as you can see below. Could you
> please suggest any solution to this and let me know if I am doing anything
> incorrectly? Any help would be much appreciated.
> The details are as follows:
>
> nGPU=1
> 
> # of Threads   CPU Time (s)   WALL Time (s)
> 01                  254.23          384.27
> 02                  295.45          466.33
> 03                  328.89          538.62
> 04                  348.81          602.85
> 08                  501.31          943.32
> 12                  698.45         1226.86
> 16                  836.71         1505.39
> 20                  905.77         1645.66
> 24                 1094.81         1973.97
> 28                 1208.93         2278.81
> 32                 1403.27         2570.51
> 36                 1688.97         3068.91
> 40                 1820.06         3306.49
> 44                 1905.88         3603.96
> 48                 2163.18         4088.75
> 
> nGPU=2
> 
> # of Threads   CPU Time (s)   WALL Time (s)
> 01                  226.69          329.51
> 02                  271.29          336.65
> 03                  312.36          335.24
> 04                  341.50          333.20
> 06                  400.42          328.66
> 12                  632.82          332.90
> 24                  992.02          335.28
> 48                 1877.65          438.40
> 
> nGPU=4
> 
> # of Threads   CPU Time (s)   WALL Time (s)
> 01                  237.48          373.21
> 02                  268.85          382.92
> 03                  311.39          391.29
> 04                  341.14          391.71
> 06                  422.42          391.13
> 12                  632.94          396.75
> 

Re: [QE-users] Bug in GPU acceleration of QE7.0

2021-12-29 Thread Filippo Spiga
Dear Xiongyi,

could you share a bit more about how you compile and run the calculation?

It will be even better if you can also share the problem (including the
input case, how you built it and how you run it) on the QE GitLab to track the
bug properly: https://gitlab.com/QEF/q-e/-/issues

Thank you

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Wed, 29 Dec 2021 at 06:44, LEUNG Clarence  wrote:

> Dear QE developers
>
>
>
> Recently, I ran pw.x with GPU acceleration in QE 7.0. I ran the serial
> version of pw.x; my GPU is an NVIDIA A100.
>
> I found that the total WALL time is much larger than the total CPU time, as follows:
>
>
>
>  PWSCF:   4d 2h16m CPU   5d 9h56m WALL
>
>
>
> In addition, I found that the electron iteration step with GPU
> acceleration is faster than that without GPU acceleration (Intel GOLD 48
> cores). But pw.x with GPU acceleration stalls at the following
> stage for a long time (about an hour) before continuing to run.
>
>
>
>  atom    1 type  2   force =     0.00930486    0.00537216   -0.00585990
>  atom    2 type  2   force =     0.00632551    0.00365204   -0.00023214
>  atom    3 type  2   force =     0.00500137    0.00288754   -0.01079160
>  atom    4 type  2   force =    -0.00857256   -0.00494937    0.00781754
>  atom    5 type  2   force =     0.00034148    0.00747704   -0.00149244
>  atom    6 type  2   force =    -0.00351869   -0.00066557    0.00423555
>  atom    7 type  1   force =     0.00191356    0.00110479   -0.00243622
>  atom    8 type  1   force =     0.00059658    0.00034443   -0.00198145
>  atom    9 type  1   force =    -0.00076710   -0.00320224   -0.00411599
>  atom   10 type  3   force =    -0.01006001   -0.00580815    0.00783630
>  atom   11 type  2   force =     0.00373349    0.00215553   -0.00351109
>  atom   12 type  2   force =    -0.00177126   -0.00102264   -0.00201425
>  atom   13 type  2   force =     0.00722271    0.00417004    0.00216690
>  atom   14 type  2   force =     0.01566730    0.00904552    0.01426725
>  atom   15 type  2   force =     0.00664604   -0.00344279   -0.00149244
>  atom   16 type  2   force =    -0.00233575   -0.00271449    0.00423555
>  atom   17 type  1   force =    -0.00163852   -0.00094600    0.00488642
>  atom   18 type  1   force =    -0.00031498   -0.00018185   -0.00098028
>  atom   19 type  1   force =    -0.00315677    0.00093679   -0.00411599
>  atom   20 type  2   force =     0.00257577   -0.00119790   -0.00022574
>  atom   21 type  2   force =    -0.00511906   -0.00859937    0.00075321
>  atom   22 type  2   force =     0.00197298   -0.00113957   -0.00328624
>  atom   23 type  2   force =     0.00114530    0.00055509   -0.00017065
>  atom   24 type  2   force =    -0.01774445   -0.01024477   -0.00958551
>  atom   25 type  2   force =    -0.00729390   -0.00421113    0.00597704
>  atom   26 type  1   force =     0.00238826   -0.00222600    0.00428884
>  atom   27 type  1   force =    -0.00018626   -0.00058283   -0.00369369
>  atom   28 type  1   force =    -0.00177792   -0.00102648    0.9453
>  atom   29 type  2   force =     0.00025048    0.00282963   -0.00022574
>  atom   30 type  2   force =    -0.01000680   -0.00013355    0.00075321
>  atom   31 type  2   force =    -0.0041        0.00227844   -0.00328624
>  atom   32 type  2   force =     0.00105338    0.00071431   -0.00017065
>  atom   33 type  2   force =     0.00663975    0.00383346   -0.00386934
>  atom   34 type  2   force =    -0.00284220   -0.00164094    0.00579168
>  atom   35 type  1   force =    -0.00073364    0.00318129    0.00428884
>  atom   36 type  1   force =    -0.00059788    0.00013011   -0.00369369
>  atom   37 type  1   force =     0.00565934    0.00326742   -0.00016157
>
>  Total force = 0.052465     Total SCF correction = 0.000670
>
>
>
>  Computing stress (Cartesian axis) and pressure
>
>
>
> I am writing to report this issue and to ask whether someone has come across
> the same problem, or whether it is just my inappropriate compilation.
>
>
>
>
>
> Thanks!
>
> Best regards,
>
> LIANG Xiongyi
>
> Postdoctoral Fellow
>
> The University of Hong Kong
>
>
> ___
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
> users mailing list users@lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] [QE-GPU] Disable GPU acceleration for some calculations

2021-11-29 Thread Filippo Spiga
This advice is still valid: build one version with GPU support and one
without.
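
A minimal sketch of the two-build approach (configure flags as in QE 6.x with
the NVHPC/PGI toolchain; adapt paths, CUDA version and compute capability to
your system):

./configure --with-cuda=$CUDA_HOME --with-cuda-cc=70 --with-cuda-runtime=11.0
make pw && cp bin/pw.x ~/bin/pw-gpu.x
make veryclean
./configure                          # plain CPU-only build
make pw && cp bin/pw.x ~/bin/pw-cpu.x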

For what type of pw.x calculations (and why) do you not want to use the GPU?

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Sun, 28 Nov 2021 at 14:21, Oliver Generalao 
wrote:

> Hi,
> Just to share.
> I had a similar situation years ago (2016-2017) where I had to do
> benchmarking with and without a GPU. I had to compile two versions of
> PWscf, *pw.x* and *pw-gpu.x*. I am not sure if things have changed a lot
> since then.
>
>
> On Sun, Nov 28, 2021 at 10:10 PM Anson Thomas 
> wrote:
>
>> Dear QE experts,
>>
>> I have installed QE 6.8 with GPU acceleration (Ubuntu 18.04.5 LTS
>> (GNU/Linux 4.15.0-135-generic x86_64, Processor: Intel Xeon Gold 5120 CPU
>> 2.20 GHz (2 Processor) RAM: 96 GB Graphics Card: NVIDIA Quadro P5000 (16
>> GB)).
>> For some pw.x calculations, however, I desire to not use the GPU
>> acceleration. Is there a way (some command-line option, or adding lines to
>> bashrc or some other way) to not use GPU acceleration for a particular
>> calculation without recompiling QE?
>>
>> --
>> *Anson Thomas*
>> M.Sc. Chemistry, IIT Roorkee
>>
>> ___
>> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
>> users mailing list users@lists.quantum-espresso.org
>> https://lists.quantum-espresso.org/mailman/listinfo/users
>
> ___
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
> users mailing list users@lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] [QE-GPU] Ylm out of bounds

2021-11-03 Thread Filippo Spiga
Dear Daniel,

can you open a bug report here https://gitlab.com/QEF/q-e/-/issues and
provide an input case (+ pseudo) to reproduce the problem?

Thank you

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Tue, 2 Nov 2021 at 15:09, Daniel B. Straus  wrote:

> Hello,
>
>
>
> Jobs that ran fine with 6.7 are failing in 6.8 with the following error:
>
>
>
>
> %%
>
>  Error in routine  ylmr (6):
>
>  l>4 => out of bounds in Ylm with CUDA Kernel
>
>
> %%
>
>
>
> I am using the containers published at
> https://ngc.nvidia.com/catalog/containers/hpc:quantum_espresso for both
> 6.7 and 6.8.
>
>
>
> Any ideas?
>
>
>
> Thanks,
>
> Daniel
>
>
>
> --
>
> Daniel Straus
>
> Postdoctoral Research Associate
>
> Department of Chemistry
>
> Princeton University
>
> Frick Laboratory A09
>
> Princeton, NJ 08544
>
> dstr...@princeton.edu
>
>
> ___
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
> users mailing list users@lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] [SUSPECT ATTACHMENT REMOVED] [QE-GPU] OpenMP is not working with my compilation

2021-08-03 Thread Filippo Spiga
Dear Takahiro,

if the cluster supports containers, you could try to deploy Quantum
ESPRESSO from the NVIDIA GPU Cloud (NGC). See here:
https://ngc.nvidia.com/catalog/containers/hpc:quantum_espresso (latest
version uploaded is v6.7)

Make sure you run the calculation *only* on the socket where the GPU is
attached. Consult your HPC centre User Support team to understand how to do
it.

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Mon, 2 Aug 2021 at 00:01, Takahiro Chiba <
takahiro_ch...@eis.hokudai.ac.jp> wrote:

> Dear experienced users,
>
> I have trouble in utilizing OpenMP with my compilation. From the
> output file, pw.x 6.8 recognizes "OMP_NUM_THREADS=2", but it took the same
> time as "OMP_NUM_THREADS=1", and according to the PBS batch queue, only
> 100% (not 200%) of CPU is used. Therefore, QE 6.8 with GPU is not as
> fast as expected.
>
> I used the NVIDIA HPC SDK 20.9, CUDA 10.1, and Intel MKL 2021.2. The node
> has two Xeon Gold 6248, one Tesla V100 32GB, and 768GB of RAM.
> Benchmark results and make.inc are attached as tarball.
>
> Could you please point out my mistake?
>
> ---Sender---
> Takahiro Chiba
> 1st-year student at grad. school of chem. sci. and eng., Hokkaido Univ.
> Expected graduation date: Mar. 2023
> takahiro_ch...@eis.hokudai.ac.jp
> -
> ___
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
> users mailing list users@lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] CUDA-compiled version of Quantum Espresso

2021-06-24 Thread Filippo Spiga
Hello Chiara,

Maxwell and Turing architectures are not a good fit due to their reduced
double-precision support; they target a different market segment. Kepler,
Pascal, Volta and Ampere are a good fit. Kepler and Pascal are old, so I
strongly suggest Volta or Ampere (the latest being better, obviously). If you
have Pascal, it can work.

Independently of the GPU architecture, there are GPU SKUs that are more
suitable for double precision than others. Quadro cards are not a good fit,
with two exceptions: the Quadro GP100 (very old, you can't buy this new
anymore) and the GV100 (you may be able to find some, but it has reached EOL).
For serious HPC double-precision calculations you need the Tesla product
line: A100 PCIe or A100 SXM (HGX or DGX products).

In short: don't buy Quadro cards and don't buy GeForce; performance will
not be optimal. If you are procuring a proper HPC system, you should
target A100 products.

HTH

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


On Sun, 20 Jun 2021 at 16:29, Chiara Biz  wrote:

> Dear QE team,
>
> we would like to compile GPU-QE since we learnt that it is faster for
> spin-polarized wavefunctions.
> We would like to ask you which NVIDIA architectures are supported by QE:
> Maxwell, Pascal, Turing, Ampere?
> Can QE support SLI?
>
> These are crucial aspects for us because they will determine our choices
> on the market.
>
> Thank you very much for your attention and have a nice day.
>
> Yours Sincerely,
> Chiara Biz (MagnetoCat SL/SpinCat consortium)
>
> ___
> Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
> users mailing list users@lists.quantum-espresso.org
> https://lists.quantum-espresso.org/mailman/listinfo/users
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[Pw_forum] New pre-production GPU-accelerated Quantum ESPRESSO available (v0.2)

2017-07-05 Thread Filippo Spiga
Hi all,

I tag a new pre-production version, v0.2. See 
https://github.com/RSE-Cambridge/qe-gpu/releases/tag/v0.2

Please refer to README.md for additional information

Happy Computing

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE installation with OpenMPI

2017-01-25 Thread Filippo SPIGA
Out of curiosity, have you tried QE 6.0?

On Jan 25, 2017, at 7:00 AM, Aldo Ugolotti <a.ugolo...@campus.unimib.it> wrote:
>> and have you set your PATH and LD_LIBRARY_PATH correctly after installing 
>> Open MPI?
> Yes, I have. PATH and LD_LIBRARY_PATH are including openMPI installation 
> folder.
>> 
>> Please send us the "install/config.log" file, this will help understanding 
>> what is going on ...
>> 
>> 
> The log files can be found here (expiring in 48h): 
> https://expirebox.com/download/71ceaf682dbf930ab015daed17f4407f.html

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE installation with OpenMPI

2017-01-25 Thread Filippo SPIGA
On Jan 25, 2017, at 2:10 AM, Aldo Ugolotti <a.ugolo...@campus.unimib.it> wrote:
> I have tried it and unluckily it is not effective. I am still able to run a 
> parallel calculation within the same node, but if I ask to start the tasks on 
> a different node, the mpirun command remains stuck.

and have you set your PATH and LD_LIBRARY_PATH correctly after installing Open 
MPI?
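
For reference, a typical setup looks like this (assuming an installation
prefix of /opt/openmpi; yours may differ):

export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH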

Please send us the "install/config.log" file, this will help understanding
what is going on ...

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE installation with OpenMPI

2017-01-24 Thread Filippo SPIGA
On Jan 24, 2017, at 9:51 AM, Aldo Ugolotti <a.ugolo...@campus.unimib.it> wrote:
> I am not relying on Intel compiler, I only have gfortran for now, but I just 
> changed the flags accordingly like FC=gfortran CC=gcc, but I still have 
> issues.

If you do not have Intel compilers installed, just run "./configure 
--enable-parallel" without specifying MPIF90, FC or CC.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Intel Xeon Phi support in QE6.0?

2017-01-15 Thread Filippo SPIGA
Dear Rolly,

On Jan 13, 2017, at 5:23 PM, Rolly Ng <roll...@gmail.com> wrote: 
> 
> I would like to know the latest development in support of the Intel Xeon Phi 
> accelerator in QE6.0.

you need to be a bit more specific: Intel Phi Knights Corner (KNC) or Intel Phi 
Knights Landing (KNL)?

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] pw.x slurm srun failed with intel mpi?

2017-01-15 Thread Filippo SPIGA
Dear Rolly,

this mailing-list is about QE, not about fixing people's HPC clusters. I am 
sorry, but you need to find someone in your IT department who can help you on 
this matter.

On Jan 15, 2017, at 7:51 AM, Rolly Ng <roll...@gmail.com> wrote:
> I have an srun problem on an Ubuntu 16.04 cluster with Intel MPI. Could you 
> please help me check what is going on? Thank you!
> I am trying to install slurm in a cluster running ubuntu 16.04.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE-GPU problems.

2017-01-01 Thread Filippo SPIGA
On Dec 30, 2016, at 3:48 AM, Oliver Generalao <oliver.b.genera...@gmail.com> 
wrote:
> The only catch, which Mr Spiga always points out, is that the DP performance 
> on GTX cards  is very slow , which is like 1/32 (or 1/24) of the Single 
> Precision

So the GTX 1070 is not worth using. With a little bit of tuning of your input
and better compile flags, the CPU will perform the same as or faster than the
CPU+GPU.

Just because you have a GTX 1070 does not mean you must use it. If you buy a
GPU only for QE, do not buy that model. Very simple.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


[Pw_forum] Quantum ESPRESSO Developers Meeting 2017 (plus details to attend in live streaming!)

2016-12-22 Thread Filippo SPIGA
Dates
Monday January 9th, 2017 (full day) and Tuesday January 10th, 2017 (half day)

Location
Kastler Lecture Hall, Adriatico Guest House, ICTP (Italy)

Agenda

Monday January 9th

09:00 - 09:30   Opening talk
Paolo Giannozzi (U.Udine)

09:30 - 09:50   New XML I/O in QE
Pietro Delugas (SISSA)

09:50 - 10:10   New FFT data distribution to improve scalability,
Stefano de Gironcoli (SISSA)

10:10 - 10:30   Diagonalization algorithms with reduced subspace diagonalization
Anoop Chandran (ICTP / MHPC student)

10:30 - 11:00   Break

11:00 - 11:20   Status report on AiiDA, and the Materials Cloud portal for QE 
data
Nicola Marzari (EPFL)

11:20 - 11:50   Quantum Espresso enabling on Marconi
Fabio Affinito (CINECA)

11:50 - 12:20   Optimizing EXX calculations in QE for Intel Xeon Phi
Thorsten Kurth/Taylor Barnes (NERSC)

12:20 - 13:00   Discussions

13:00 - 14:00   Lunch

14:00 - 14:20   Progress in EPW and discuss test-farm & automatic documentation
Samuel Ponce (U.Oxford)

14:20 - 14:40   Using QE as a library: Lessons learned by developing the 
Sternheimer GW code
Martin Schlipf (U.Oxford)

14:40 - 15:00   thermo_pw: a Fortran driver for the quantum ESPRESSO routines
Andrea Dal Corso (SISSA)

15:00 - 15:20   WEST: large-scale excited states calculations
Marco Govoni (U.Chicago)

15:30 - 17:30   Discussions

Tuesday January 10th

09:30 - 09:50   DFT and DFPT for two-dimensional systems: implementation of 2D 
Coulomb cutoff in PW and PH
Thibault Sohier (EPFL)

09:50 - 10:10   Hubbard interactions from density functional perturbation theory
Iurii Timrov (EPFL)

10:10 - 10:30   New development in exact exchange calculations
Ivan Carnimeo (SISSA)

10:30 - 11:00   Break

11:00 - 11:20   Magnetic excitations
Tommaso Gorni (SISSA)

11:30 - 13:00   Discussions

13:00 - 14:00   Lunch


Recordings
All talks will first be broadcast via YouTube streaming (see links below) and 
then all recordings will be uploaded to a separate YouTube channel. Discussions 
will not be recorded.

Monday morning   from 09:00 to 10:20 = https://youtu.be/xE4hOlt-LXY

Monday morning   from 11:00 to 12:20 = https://youtu.be/lpR91QQApMU

Monday afternoon from 14:00 to 15:20 = https://youtu.be/6ghwf3bxtA4

Tuesday morning  from 09:30 to 10:30 = https://youtu.be/d-rNgX7uhW4

Tuesday morning  from 11:00 to 11:20 = https://youtu.be/Q8nHysyUIoY

Social media
Follow Quantum ESPRESSO on Facebook [1] and Twitter [2] and LinkedIn [3]


A big thank you to Ivan Girotto (ICTP) and all ICTP staff for hosting the 
event, the technical support (especially for the streaming) and the logistics.

[1] https://www.facebook.com/QuantumESPRESSO/
[2] https://twitter.com/QuantumESPRESSO
[3] https://www.linkedin.com/groups/4650183

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Howto DIY a small cluster with Infiniband for QE simulation?

2016-12-18 Thread Filippo SPIGA
On Dec 17, 2016, at 3:33 PM, Rolly Ng <roll...@gmail.com> wrote:
> (1) how to configure the infiniband network on OpenSuSE 13.2?

I am afraid you need to google this, the pw_forum is not a mailing-list to help 
people solve Linux issues.


> (2) how to compile QE to make use of the infiniband hardware?

If the IB drivers and OFED/MOFED are correctly installed, the MPI library will 
detect them and run MPI traffic over InfiniBand. If not, it will use Ethernet 
(assuming there is Ethernet).
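
A quick sanity check on each node (tools from the infiniband-diags/OFED
packages, assuming they are installed):

ibstat                     # ports should report "State: Active"
ompi_info | grep openib    # was Open MPI built with the openib BTL?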


> (3) how to compile and run GPU (pw-gpu.x) across the cluster?

It is like running the normal PW: you enable parallel mode 
("--enable-parallel") and use mpirun. Keep the number of MPI ranks the same as 
the number of GPUs.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE-GPU (installation)

2016-12-06 Thread Filippo SPIGA
On Dec 6, 2016, at 4:50 AM, Phanikumar Pentyala <phani12.c...@gmail.com> wrote:
> Initially I wrote that my GPU was a Tesla K40, but it is not. The actual model 
> name is NVIDIA Tesla K40m. Sorry for that, that was my mistake

No difference between the two apart from the cooling.


> If I try compiling with QE-5.3.0, I get a different error (config.status: 
> error: cannot find input file: ../include/fft_defs.h.in)

Use 5.4, not 5.3. It is not worth looking at 5.3 with the latest version.


> Intel® math kernel library (MKL)

Yes, download them for free


> Intel® Message Passing Interface (MPI) library installation required or not?

No, download Open MPI or install it via apt-get/yum/whatever on the system

Moreover, make sure you have some form of FFTW installed on the system as well.
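
For example, on a Debian/Ubuntu system (package names may vary by
distribution):

sudo apt-get install openmpi-bin libopenmpi-dev libfftw3-dev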


--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] QE-GPU (installation)

2016-12-05 Thread Filippo SPIGA
On Dec 5, 2016, at 2:11 PM, Phanikumar Pentyala <phani12.c...@gmail.com> wrote:
> After change the compilation command:  ./configure --enable-parallel 
> --enable-openmp --with-scalapack --enable-cuda --with-gpu-arch=sm_35 
> --with-cuda-dir=/usr/local/cuda-8.0/bin --without-magma --with-phigemm (now I 
> changed with scalapack)

You have a single server with 2 K40s, right? There is no point in using 
ScaLAPACK for only 2 MPI processes; please disable ScaLAPACK and enable MAGMA.


> configure: error: Cannot compile against this version of Quantum ESPRESSO. 
> Use v5.4

Weird, it should work. I tested it multiple times a month ago on my system. I 
will look into this tonight or tomorrow and get back to you.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE-GPU (installation)

2016-12-05 Thread Filippo SPIGA
Dear Phanikumar,

upgrade to QE v5.4 and upgrade to the latest QE-GPU from GIT 
(https://github.com/fspiga/QE-GPU/archive/5.4.tar.gz). I've just created a tag 
to simplify checkout operations.

Moreover, you want to use MAGMA if you do not use ScaLAPACK. If you disable 
both, most likely you will run single-core LAPACK and there will be no gain 
from having a GPU.

On Dec 5, 2016, at 5:57 AM, Phanikumar Pentyala <phani12.c...@gmail.com> wrote:
> I am trying to install QE-GPU-14.10.0 on my server (details below). During 
> configuration it is not generating files and shows an error 
> (config.status: error: cannot find input file: ../include/fft_defs.h.in). I 
> am new to Linux, please help me to successfully install the GPU version.
> 
> Command I used for configuration: ./configure --enable-parallel 
> --enable-openmp --enable-cuda --without-scalapack --with-gpu-arch=sm_35 
> --with-cuda-dir=/usr/local/cuda-7.5/bin --without-magma --with-phigemm

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Large attachments

2016-12-03 Thread Filippo Spiga
On Dec 2, 2016, at 11:01 AM, Paolo Giannozzi <p.gianno...@gmail.com> wrote:
> There are various places where you can store files for free.

or, instead of hosting your own, there are free services on the web, for example

https://expirebox.com
https://file.town
https://uguu.se

and then many more, just google it.

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] workaround: add preprocessing option -Dzdotc=zdotc wrapper to DFLAGS

2016-11-30 Thread Filippo SPIGA
Sorry Jiantuo,

which machine are you using? Which compilers? Which BLAS library?
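
For reference, the workaround quoted below means editing the DFLAGS line in
your make.sys; a sketch (the flags around it are illustrative and depend on
your configure output):

DFLAGS = -D__GFORTRAN -D__FFTW -D__MPI -D__PARA -Dzdotc=zdotc_wrapper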

On Nov 30, 2016, at 12:12 PM, Jiantuo Gan <jiantuo.gan@gmail.com> wrote:
> Dear all,
> 
> I have a problem with running pw2wannier90.x, which I understand comes from 
> a bug in calling the function zdotc.
> However, I am not quite sure how I can apply this workaround:
> workaround: add preprocessing option -Dzdotc=zdotc_wrapper to DFLAGS 
> 
> Can anyone please give me any hints, I am fresh on this, thanks a lot!
> -- 
> Yours sincerely,
> J. Gan
>  
> PhD Thin Film Solar Cells, University of Oslo 2012-2015
> MSc Nanoscience, Lund University 2010-2012
> BSc Materials Physics Hebei University of Technology 2006-2010
> Tel: +47 96855568
> Linked in:http://se.linkedin.com/pub/jiantuo-gan/31/960/454
> 
>  
> 
> 
> 
> -- 
> Yours sincerely,
> J. Gan
>  
> PhD Thin Film Solar Cells, University of Oslo 2012-2015
> MSc Nanoscience, Lund University 2010-2012
> BSc Materials Physics Hebei University of Technology 2006-2010
> Tel: +47 96855568
> Linked in:http://se.linkedin.com/pub/jiantuo-gan/31/960/454
> 
>  
> _______
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Question regarding QE-GPU 6 availability

2016-11-28 Thread Filippo Spiga
On Nov 28, 2016, at 4:16 PM, Josue Itsman Clavijo Penagos 
<jiclavi...@unal.edu.co> wrote:
> I'd like to openly ask if the GPU feature is already available for QE v6, 
> since I'm testing that QE version and it seems to me the GPU tarball is 
> located nowhere among the installation packages.

No, it is not. Too early for any release.

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Running in Parallel

2016-11-23 Thread Filippo SPIGA
On Nov 22, 2016, at 11:48 PM, Mofrad, Amir Mehdi (MU-Student) 
<am...@mail.missouri.edu> wrote:
> After I compiled version 6 I can't run it in parallel.

A bit more information about how you compiled and how you run will be useful 
to understand your problem

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE 6.0 major include changes?

2016-10-29 Thread Filippo SPIGA
On Oct 27, 2016, at 6:13 PM, Paolo Giannozzi <p.gianno...@gmail.com> wrote:
> '--with-netlib' replaces '--with-internal-lapack' and '--with-internal-blas'

and now QE uses LAPACK from netlib [1] (which has BLAS shipped in the same 
archive) instead of an old LAPACK version. 

If you are using an XC-40 with the Intel compiler, I also assume you are using 
Intel MKL. CLE provides wrappers to simplify people's lives; they should 
somehow make sure you link the math libraries correctly.

It will be very useful if you can report to us:
- the exact environment you used (module list)
- the exact version of Intel compiler (ifort --version)
- the CLE version on your system 
- the output of configure (located in "install/config.log")

Thanks

[1] http://www.netlib.org/lapack/index.html

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] missing tutorials

2016-10-11 Thread Filippo Spiga
On 11 Oct 2016, at 05:47, yelena <yel...@ipb.ac.rs> wrote:
> I have a young co-worker who would highly benefit from watching the videos and 
> tutorials that were available at this link. Can we find them somewhere else?

I temporarily removed them to avoid trouble with a QE backend server, which 
could have resulted in stopping the automatic pseudo-potential retrieval 
system. The risk appeared during the v6 release process.

Those files can be restored easily; it just takes the time needed for an rsync. 
Nicola is offering to provide a backup website for them, and this is a good 
thing for the future.

I've also looked into converting the videos into a format suitable for 
uploading to YouTube. This is feasible, despite the poor resolution. It just 
requires time.

--
Filippo SPIGA
* Sent from my iPhone, sorry for typos *

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


[Pw_forum] Version 6.0 of Quantum ESPRESSO is available for download

2016-10-04 Thread Filippo SPIGA
Dear all,

I am pleased to announce that Version 6.0 of Quantum ESPRESSO (SVN revision 
r13079 + fixes) is now available for download.

You can find all related packages published on the QE-FORGE website at this 
link:
http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView_id=224

Or download directly "qe-6.0.tar.gz" here:
http://qe-forge.org/gf/download/frsrelease/224/1044/qe-6.0.tar.gz

I want to underline a few important differences in the packaging of this release:
- "qe-6.0.tar.gz" contains *ALL* core packages (type 'make' for more detail)
- "qe-6.0-examples.tar.gz" contains various examples for each core packages 
provided in the suite
- "qe-6.0-test-suite.tar.gz" contains the test-suite for validation purposes 
(PW, CP and EPW supported)

The same package structure will be used for future releases as well. Please refer 
to the file "Doc/release-notes" for additional details about the release (new 
features, fixes, incompatibilities, known bugs). For any new bug, suspicious 
misbehavior or any proven reproducible wrong result, please get in contact with 
the developers by writing directly to q-e-develop...@qe-forge.org

Happy Computing!

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] can not install parallel run of QE

2016-09-12 Thread Filippo SPIGA
Dear Tao,

are you using a beta v6.0 version or an old v5.x version?


On Sep 12, 2016, at 8:49 PM, Yu, Tao <t...@tntech.edu> wrote:
> Hi,
> 
> During the installation process, after running ./configure, it returned a 
> piece of information,
> 
> Parallel environment not detected (is this a parallel machine?).
> Configured for compilation of serial executables.
> 
> What happened there? Does this mean I can only run with 1 CPU? How to fix that?
> 
> Many thanks,
> 
> Tao

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] ask for test runs for installation

2016-09-07 Thread Filippo Spiga
On Sep 7, 2016, at 8:54 PM, Yu, Tao <t...@tntech.edu> wrote:
> We installed QE in our computer, and would like to do some test runs. Is 
> there any test run file we can use to validate our installation?

Yes, test-suite (assuming you are looking at 5.4 or later). You need to do 

$ make test-suite

and serial tests for PW and CP will be performed

--
Filippo SPIGA ~ http://fspiga.github.io ~ skype: filippo.spiga


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] errors testing espresso 5.4.0

2016-09-06 Thread Filippo SPIGA
On Sep 5, 2016, at 6:19 PM, Fabricio Cannini <fcann...@gmail.com> wrote:
> I can make some more tests if it helps, just tell me what to do.

Grab this latest full snapshot of the QE SVN repo I just generated:
http://qe-forge.org/snapshots/espresso-r12924-2016-09-07.tar.gz

It contains Paolo's fixes. Feel free to try running it and give us feedback on 
whether the issue still persists on the machine where you ran the initial tests.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] errors testing espresso 5.4.0

2016-09-04 Thread Filippo SPIGA
Dear Fabricio,

are you running these tests manually or via test-suite?

Well, the error messages are quite clear, so it is a matter of investigating 
whether these are reproducible. We are tracking bugs in a very "amateurish" way 
at the moment, but we will look into it.

Thanks for reporting.

On Sep 3, 2016, at 1:50 AM, Fabricio Cannini <fcann...@gmail.com> wrote:
> Hello there
> 
> I'm facing errors in a few tests of espresso 5.4.0.
> I'm compiling it on a CentOS 6.x machine in the following manner:
> =
> FC = intel 15.0
> MPI = impi 5.0
> BLAS/LAPACK = mkl 11.2
> FFT = fftw 3.3.5
> 
> BLAS_LIBS="-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core"
> LAPACK_LIBS="-lmkl_core"
> FFT_LIBS="-lfftw3"
> FFLAGS="-O2 -assume byterecl -g -traceback -fpe0 -CB -openmp"
> MPIF90=mpiifort
> 
> ./configure --enable-openmp --enable-parallel --without-scalapack
> 
> make pw cp ph neb epw
> =
> 
> 
> When running the pw tests, some of those fail no matter how many mpi 
> processes I use.
> 
> 'pw_b3lyp/b3lyp-h2o.in' and 'pw_b3lyp/b3lyp-O.in' fail with the error 
> message:
> ---
> forrtl: severe (408): fort: (2): Subscript #1 of the array CORR has 
> value 12 which is greater than the upper bound of 10
> 
> Image              PC                Routine            Line     Source
> pw.x               016F0EF0          Unknown            Unknown  Unknown
> pw.x               00D7B085          funct_mp_set_dft_  597      funct.f90
> pw.x               00D79837          funct_mp_enforce_  723      funct.f90
> pw.x               00E2E054          read_pseudo_mod_m  101      read_pseudo.f90
> pw.x               006EA301          iosys_             1444     input.f90
> pw.x               004080B9          run_pwscf_         63       run_pwscf.f90
> pw.x               00407FBD          MAIN__             30       pwscf.f90
> pw.x               00407F1E          Unknown            Unknown  Unknown
> libc.so.6          0034F221ED1D      Unknown            Unknown  Unknown
> pw.x               00407E29          Unknown            Unknown  Unknown
> ---
> 
> 
> 'pw_uspp/uspp-hyb-g.in' fails with the error message:
> ---
> forrtl: severe (408): fort: (2): Subscript #1 of the array DSPHER has 
> value 1 which is greater than the upper bound of 0
> 
> Image              PC                Routine            Line     Source
> pw.x               016F0EF0          Unknown            Unknown  Unknown
> pw.x               00517B8C          realus_mp_real_sp  602      realus.f90
> pw.x               0050D056          realus_mp_addusfo  1284     realus.f90
> pw.x               00AFB1F6          force_us_.L        113      force_us.f90
> pw.x               006A3415          forces_            90       forces.f90
> pw.x               004081B8          run_pwscf_         129      run_pwscf.f90
> pw.x               00407FBD          MAIN__             30       pwscf.f90
> pw.x               00407F1E          Unknown            Unknown  Unknown
> libc.so.6          0034F221ED1D      Unknown            Unknown  Unknown
> pw.x               00407E29          Unknown            Unknown  Unknown
> ---
> 
> 
> 'pw_vdw/vdw-ts.in' fails with the error message:
> ---
> forrtl: severe (408): fort: (2): Subscript #1 of the array UTSVDW has 
> value 5201 which is greater than the upper bound of 5200
> 
> Image              PC                Routine            Line     Source
> pw.x               016F0EF0          Unknown            Unknown  Unknown
> pw.x               00470436          v_of_rho_          92       v_of_rho.f90
> pw.x               0080AE0B          potinit_           227      potinit.f90
> pw.x               006D98F8          init_run_          99       init_run.f90
> pw.x               00408111          run_pwscf_         78       run_pwscf.f90
> pw.x               00407FBD          MAIN__             30       pwscf.f90
> pw.x               00407F1E          Unknown            Unknown  Unknown
> libc.so.6          0034F221ED1D      Unknown            Unknown  Unknown
> pw.x               00407E29          Unknown            Unknown  Unknown
> ---
> 
> 
> 
> All messages are similar so they may have a common cause, but I'm unable 
> to tell why exactly. Any ideas?
> 
> 
> TIA,
> Fabricio
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Problem with MPI parallelization: Error in routine zsqmred

2016-09-02 Thread Filippo SPIGA
Dear Jan,

Paolo is right, you are providing us very little information to help. Please 
create a tar.gz containing:
- your make.sys
- the file "install/config.log"
- the submission script you used to run the job
- the input file
- the pseudo-potentials required to run the example
- some technical details about your workstation / server / cluster


On Sep 2, 2016, at 8:43 AM, Jan Oliver Oelerich 
<jan.oliver.oeler...@physik.uni-marburg.de> wrote:
> 
> Hi QE users,
> 
> I am trying to run QE 5.4.0 with MPI parallelization on a mid-size 
> cluster. I successfully tested the installation using 8 processes 
> distributed on 2 nodes, so communication across nodes is not a problem. 
> When I, however, run the same calculation on 64 cores, I am getting the 
> following error messages in the stdout:

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] ZGEMM or MATMUL for MPI environment

2016-09-01 Thread Filippo SPIGA
On Aug 31, 2016, at 9:33 PM, Ilya Ryabinkin <igryabin...@gmail.com> wrote:
> So, the bottom line: ZGEMM and MATMUL should give the same, right?

assuming you get all paramaters right, yes. 

However, as best practice, use always ZGEMM instead of MATMUL.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Availability beta version of Quantum ESPRESSO v6.0

2016-08-31 Thread Filippo SPIGA
On Aug 30, 2016, at 10:22 PM, Ye Luo <xw111lu...@gmail.com> wrote:
> Maybe a check can be added to prevent users using old compilers in this case.

This can be done. Put it on the list.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] [Q-e-developers] Availability beta version of Quantum ESPRESSO v6.0

2016-08-29 Thread Filippo Spiga
Hello Ye,

On 29 Aug 2016, at 15:01, Ye Luo <xw111lu...@gmail.com> wrote:
> I just noticed that src/ELPA is added in my building command line even though 
> it is not included in the make.inc and I'm not using ELPA. The compiler 
> complains about the non-existent path.

Which compiler?

ELPA is going to become a completely separate dependency instead of being part 
of the QE package (see "archive/"), something like ScaLAPACK. Users will have 
to download, build and link it to QE explicitly. The configure in v6.0 will 
handle this case better than it does now.

--
Filippo SPIGA
* Sent from my iPhone, sorry for typos *___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Anyone running on a SPARC64 architecture?

2016-08-29 Thread Filippo SPIGA
Thanks Mitsuaki,

can you run the configure and send me both the config.log and make.inc 
generated before any hand customization?

Thanks

On Aug 29, 2016, at 8:51 AM, Mitsuaki Kawamura <mkawam...@issp.u-tokyo.ac.jp> 
wrote:
> Dear Filippo
> 
> Hello,
> 
> I am using Quantum ESPRESSO on the FX10 (SPARC64 IXfx) at my institute.
> I run the configure script and then modify make.inc by hand.
> I attached my make.inc.
> It passes 16/17 CP tests in the test-suite with MPI.
> 
> Sorry, I am not sure whether it is the best configuration for the FX10.
> I hope for some knowledge from other FX10/K users.
> 
> Best regards,
> 
> Mitsuaki Kawamura

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Installation error in EPW with QE-6.0-Beta version

2016-08-29 Thread Filippo SPIGA
Hello Kondaiah,

if you type "make" it tells you what you can build 


* * * * * * THIS IS A BETA RELEASE * * * * * *

to install Quantum ESPRESSO, type at the shell prompt:
  ./configure [--prefix=]
  make [-j] target

where target identifies one or multiple CORE PACKAGES:
  pw           basic code for scf, structure optimization, MD
  ph           phonon code, Gamma-only and third-order derivatives
  pwcond       ballistic conductance
  neb          code for Nudged Elastic Band method
  pp           postprocessing programs
  pwall        same as "make pw ph pp pwcond neb"
  cp           CP code: CP MD with ultrasoft pseudopotentials
  tddfpt       time dependent dft code
  gwl          GW with Lanczos chains
  ld1          utilities for pseudopotential generation
  upf          utilities for pseudopotential conversion
  xspectra     X-ray core-hole spectroscopy calculations
  couple       Library interface for coupling to external codes
  gui          Graphical User Interface
  test-suite   Run semi-automated test-suite for regression testing
  all          same as "make pwall cp ld1 upf tddfpt"

where target is one of the following suite operations:
  doc          build documentation
  links        create links to all executables in bin/
  tar          create a tarball of the source tree
  tar-gui      create a standalone PWgui tarball from the GUI sources
  tar-qe-modes create a tarball for QE-modes (Emacs major modes for Quantum 
ESPRESSO)
  clean        remove executables and objects
  veryclean    remove files produced by "configure" as well
  distclean    revert distribution to the original status

  THIRD-PARTY PACKAGES are not supported in this beta release


EPW and wannier90 are not included in this beta.

I will amend the QE-FORGE message with an explicit list.

Cheers

On Aug 29, 2016, at 10:27 AM, Kondaiah Samudrala <konda.phys...@gmail.com> 
wrote:
> Dear all,
> 
> I found below error for installation of epw (make epw ) in QE-6.0-beta 
> version. can any one suggest me the way to install
> 
> /mpif90 ../wannier_prog.F90  constants.o io.o utility.o parameters.o 
> hamiltonian.o overlap.o kmesh.o disentangle.o wannierise.o plot.o transport.o 
> /home/saint/Softwares/6.0/lapack-3.2/lapack.a 
> /home/saint/Softwares/6.0/BLAS/blas.a  -o ../../wannier90.x
> parameters.o: file not recognized: File truncated
> make[4]: *** [../../wannier90.x] Error 1
> make[4]: Leaving directory `/home/saint/Softwares/6.0/wannier90-2.0.1/src/obj'
> make[3]: *** [wannier] Error 2
> make[3]: Leaving directory `/home/saint/Softwares/6.0/wannier90-2.0.1'
> make[2]: *** [w90] Error 1
> make[2]: Leaving directory `/home/saint/Softwares/6.0/install'
> make[1]: *** [w90] Error 1
> make[1]: Leaving directory `/home/saint/Softwares/6.0'
> make: *** [wannier] Error 2
> 
> PS: Also, some details/clarification on "band_plot" tool in epw update.
> 
> with best regards
> S. Appalakondaiah
> Postdoctoral scholar
> SAINT, SKKU
> South Korea

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] Anyone running on a SPARC64 architecture?

2016-08-28 Thread Filippo SPIGA
Hello,

Is anyone currently running Quantum ESPRESSO on a SPARC64 architecture (e.g. 
the K Computer)?

Please get in touch. Thanks!

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


[Pw_forum] Availability beta version of Quantum ESPRESSO v6.0

2016-08-28 Thread Filippo SPIGA
Dear all,

the Quantum ESPRESSO Development Team is pleased to release a beta version of 
Quantum ESPRESSO v6.0. 

We decided to disclose a beta release to collect as much feedback as possible 
from our user community and capture as many bugs as possible in advance. We 
will do our best to fix all issues in time for the production release. The 
v6.0 release is planned by the end of September. The "6.beta" archive can be 
downloaded from QE-FORGE: 

http://qeforge.qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView_id=219


An important note: this is *not* a production release, so there may be issues 
and *not* all third-party packages are supported and available at this stage. 
After the beta period this archive will be removed.

We appreciate and value your feedback, PLEASE download and try it. We look 
forward to hearing from you.

Happy Computing

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Error compiling QE-GPU

2016-07-11 Thread Filippo Spiga
Hello Máximo,

the problem you experienced with MKL is not related to QE-GPU. As a quick 
workaround, if you remove "-D__DFTI" and add "-D__FFTW" you will be able to 
compile without problems and possibly run.
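
Concretely, that means editing the DFLAGS line in your make.sys, e.g. (the
surrounding flags are illustrative and will differ on your system):

before:  DFLAGS = -D__INTEL -D__DFTI -D__MPI -D__PARA
after:   DFLAGS = -D__INTEL -D__FFTW -D__MPI -D__PARA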

I need to warn you: if your GPU is an NVIDIA GF110GL then you will achieve zero 
speed-up (maybe even a potential slow-down!). QE-GPU is designed to run on 
NVIDIA TESLA products (K20, K40, K80 and the future P100 Pascal generation). 
Gaming GPUs and laptop GPUs are not officially supported.

HTH

On 10 Jul 2016, at 08:05, Máximo Ramírez <aquiles...@gmail.com> wrote:
> Tech details:
> NVIDIA GF110GL
> AMD
> CentOS release 6.7

--
Filippo SPIGA
* Sent from my iPhone, sorry for typos *

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] Applications are now open for Master in High Performance Computing -- academic year 2016-2017

2016-06-14 Thread Filippo SPIGA
Dear everybody,

I would like to bring to your attention that the registrations are officially 
open for the 2016/2017 edition of the SISSA/ICTP Master in High Performance 
Computing (http://mhpc.it). Applications can be completed online until July 6, 
2016 at http://mhpc.it/how-apply 

Now in its third edition, this exclusive Master’s program will select 15 
high-profile participants to join the programme. Experts from academia and 
leading international companies will prepare students for the world of High 
Performance Computing (HPC). The number of sponsors, and the fact that the 
program is almost entirely financed by them, reflects the growing interest of 
companies and research organizations in finding trained professionals in this 
growing sector.

The Quantum ESPRESSO Foundation is among those organizations which will sponsor 
one scholarship for 12 months. Several projects are currently available to the 
successful applicants, check http://mhpc.it/ for more information!

Please feel free to forward the message to your colleagues and peers. Thank you.

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga

*
Disclaimer: "Please note this message and any attachments are CONFIDENTIAL and 
may be privileged or otherwise protected from disclosure. The contents are not 
to be disclosed to anyone other than the addressee. Unauthorized recipients are 
requested to preserve this confidentiality and to advise the sender immediately 
of any error in transmission."


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] mpi error using pw.x

2016-05-16 Thread Filippo SPIGA
On May 16, 2016, at 11:12 PM, Chong Wang <ch-w...@outlook.com> wrote:
> 1. With Intel Parallel Studio 2016 Update 3, the errors in my original post 
> persist.

We had a little compiler issue with Intel 16 in the past; I would not be 
surprised if something else is broken. Which exact version of Intel IFORT do 
you have?

Please run "ifort --version"

Thanks

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga





Re: [Pw_forum] Ifort version

2016-05-12 Thread Filippo SPIGA
Hello Alexander, 

On May 12, 2016, at 12:29 PM, Alexander Martins <alex.msilv...@gmail.com> wrote:
> I can't get a successful compilation of QE 5.3.0 with ifort 12.0.5. Should I 
> upgrade ifort?

can you tell us the exact version of ifort (ifort --version), and can you give 
us a bit more detail about the error you get?

Upgrading the compiler is a solution, and not a bad idea after all. But I am still 
curious to see and understand the error.

Regards

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga




Re: [Pw_forum] ATOMIC_POSITIONS nonexistent when using space groups

2016-05-12 Thread Filippo SPIGA
On May 12, 2016, at 1:31 PM, Gunnar Palsson <gunnar.k...@gmail.com> wrote:
> Is there a workaround for the error I’m getting?

I can pass you a couple of files to swap. Personally, I believe it is just a 
curiosity exercise and nothing great will come out of it, but if you insist it 
is fine for me :-)


> Also do you know if there is a plan to support the Maxwell architecture in 
> the future?

No, the focus will be on supporting QE-GPU on Pascal. Future NVIDIA gaming cards 
based on the Pascal architecture will have huge single-precision performance 
(> 5 TFlops per card) and decent double-precision performance as well 
(~1.4 TFlops). All of this needs to be assessed and tested, but it is going to be 
substantially better than Maxwell-based cards for codes that need double precision.

Check on Google for the GTX 1080 for more details. I look forward to trying it 
myself!


> I’m thinking whether the performance might be improved by “switching the GPU 
> to TCC mode” as explained in the link below. It seems like TCC mode is 
> supported for QUADRO M5000. 
> http://arrayfire.com/explaining-fp64-performance-on-gpus/

I have never experimented with TCC mode. I suspect it is something that is 
possible to use on GEFORCE and QUADRO products. On the TESLA product line, 
double-precision performance is satisfactory per se.


--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga




Re: [Pw_forum] [QE-GPU] Maxwell architecture

2016-05-12 Thread Filippo SPIGA
Hello Gunnar,

On May 11, 2016, at 4:15 PM, Gunnar Palsson <gunnar.k...@gmail.com> wrote:
> My question is: Is there a way to compile QE-GPU with the Maxwell 
> architecture and if so how? I read on the forum that unfortunately the 
> Maxwell architecture does not do double precision very well.

Maxwell is not supported; you can force the compilation, but as you already 
pointed out in your email, double precision is going to be bad.


> Is it a prohibitive loss of precision if one restricts the calculations to 
> single precision?

Well... for the GPU implementation you simply cannot switch precision 
on demand. QE-GPU reflects the QE implementation: if the original code is 
double precision then the GPU code is double precision. QE is all double 
precision, so the switch cannot be done.

I cannot comment on what is going to happen if you switch everywhere from 
double to single (for some parts it may work, for others it may not); domain 
experts in the physics can give a proper answer to this. From an implementation 
point of view, again, it cannot be done.

HTH

Cheers

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga





Re: [Pw_forum] Version 5.4.0 of Quantum ESPRESSO is available for download

2016-05-02 Thread Filippo SPIGA
On Apr 28, 2016, at 4:42 PM, Fabricio Cannini <fcann...@gmail.com> wrote:
> On 28-04-2016 10:00, Filippo SPIGA wrote:
>> Hello Fabricio,
>> 
>> On Apr 25, 2016, at 10:46 PM, Fabricio Cannini <fcann...@gmail.com> wrote:
>>> I'm not sseing 'QE-GPU-5.4.0.tar.gz' on the qe-forge link, and no
>>> mention of it in 'Doc/release-notes'. Am I missing something ?
>> 
>> I will upload it tomorrow afternoon.
> 
> Thanks a lot!

It took more time than expected to align the code with the latest changes in the 
code structure, but it is done. My hope is to merge the plugin with the main 
code going forward, to avoid last-minute changes having to be applied; I have 
version 6.0 as the target.

If you do "make gpu", espresso 5.4.0 will now download QE-GPU-5.4.0 for you and 
then you can run configure like you did before.
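
The whole sequence would look something like this (a sketch; the configure flags 
are illustrative and the CUDA path depends on your system):

$ cd espresso-5.4.0
$ make gpu          # fetches and unpacks QE-GPU-5.4.0 into GPU/
$ cd GPU
$ ./configure --enable-parallel --enable-cuda --with-gpu-arch=sm_35 \
    --with-cuda-dir=/usr/local/cuda --with-phigemm
$ cd ..
$ make -f Makefile.gpu pw-gpu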

HTH

Cheers

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga



Re: [Pw_forum] Version 5.4.0 of Quantum ESPRESSO is available for download

2016-04-25 Thread Filippo SPIGA
Hello Ye,

On Apr 25, 2016, at 9:29 PM, Ye Luo <xw111lu...@gmail.com> wrote:
> Thanks for making the new release available so quickly.
> I just noticed that the following line was listed in the change log but I 
> don't have access to the svn.
>   * New configure wasn't working properly for some Macintosh due to a
> missing line (commit 11976) and on BG (commit 12333)
> 
> Could you explain to me what was not correct on BG, and what critical change 
> was made (commit 12333)?
> I built qe 5.3.0 on Mira/Cetus at Argonne without problems by forcing 
> ARCH=ppc64-bgq.
> I also tried to configure 5.4.0 and compared the generated make.sys. I didn't 
> see a significant change.
> Thank you so much.


I tried on a BlueGene/Q on Friday after a long time, and I discovered there was an 
issue with how MANUAL_DFLAGS was handled (see: 
https://github.com/QEF/q-e/commit/7f3f4fb7d12f01d4f9cd90e1291eba62b59fe5d3 ). 
Everything else was working fine on 5.3.0 and it should continue to work as 
well on 5.4.0.

The number of BlueGene/Q systems available worldwide is shrinking, and so is our 
ability to test the build and configure system on that particular architecture. 
We rely on people like you to give us feedback if something does not work 
properly :-)

Cheers 

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga





[Pw_forum] Version 5.4.0 of Quantum ESPRESSO is available for download

2016-04-25 Thread Filippo SPIGA
Dear everybody,

I am pleased to announce that version 5.4.0 of Quantum ESPRESSO (SVN 
revision r12350) is now available for download.

You can find all related packages published on the QE-FORGE website at this 
link:
http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView_id=211

Or download directly espresso-5.4.0.tar.gz here:
http://qe-forge.org/gf/download/frsrelease/211/968/espresso-5.4.0.tar.gz

Please refer to the file "Doc/release-notes" for additional details about the 
release (new features, fixes, incompatibilities, known bugs). For any new bug, 
suspicious misbehavior or any proven reproducible wrong result please get in 
contact with the developers by writing directly to q-e-develop...@qe-forge.org .

Happy Computing!

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga





Re: [Pw_forum] more patches

2016-04-24 Thread Filippo SPIGA
Hello David,

we will take this issue into account, but not for this release. For the next one, 
scheduled in September, we will try to address this problem with OSX 
case-insensitive filesystems. We will do our own testing to make sure nothing 
breaks.

Cheers

On Apr 24, 2016, at 7:32 PM, David Strubbe <dstru...@berkeley.edu> wrote:
> My point about cpp is: Octopus (and some other codes) have been using this 
> approach for many years on a variety of platforms. So, I don't see any reason 
> it would cause trouble for any compilers. The issue for OSX I mention is 
> related to the filesystem and is independent of the compiler.

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga



Re: [Pw_forum] [QE-GPU] QE-GPU-5.3.0 compilation issues (UNCLASSIFIED)

2016-02-17 Thread Filippo SPIGA
Dear James,

did you make sure to clean everything before moving from CUDA 5.5 to CUDA 6.5?

Meaning:

$ make -f Makefile.gpu distclean

and re-run configure from the GPU/ folder. What happens if you disable MAGMA? 
Does it compile?
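
A full clean-and-rebuild cycle without MAGMA would look something like this (a 
sketch reusing the flags from your configure line; adjust paths to your system):

$ cd GPU
$ make -f Makefile.gpu distclean
$ ./configure --disable-parallel --enable-openmp --enable-cuda \
    --with-gpu-arch=sm_35 --with-cuda-dir=/opt/cray/nvidia/default/lib64 \
    --without-magma --with-phigemm
$ cd ..
$ make -f Makefile.gpu pw-gpu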

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga

On Feb 17, 2016, at 7:11 PM, Ianni, James C CTR USARMY RDECOM ARL (US) 
<james.c.ianni@mail.mil> wrote:
> CLASSIFICATION: UNCLASSIFIED
> 
> 
> Hi,
> 
>   I'm trying to compile QE-GPU-5.3.0 on a 64-bit SUSE Linux Server 11.3 Dell 
> node. I'm compiling with gcc/4.8.1 with cudatoolkit/6.5.14-1.0502.9613.6.1. I 
> have successfully compiled QE-GPU-5.3.0 with 
> cudatoolkit/5.5.22-1.0502.7944.3.1 before, but that version does not run with 
> the newer cudatoolkit installed.
> The compilation problem appears to be missing BLAS libraries (see output 
> below) that MAGMA is trying to call. Could it be that the cudatoolkit 
> wasn't fully installed?
> ===
> 
> Here is my configure:
> 
> export ARCH x86_64
> export F77="gfortran -fPIC"
> export FC="gfortran -fPIC"
> export CC="gcc -fPIC"
> 
> ./configure  --disable-parallel --enable-openmp  \
>  --enable-cuda --with-gpu-arch=sm_35 \
>  --with-cuda-dir=/opt/cray/nvidia/default/lib64  \
>  --with-magma --with-phigemm 
> --prefix=/usr/cta/unsupported/qe/5.3.0.gpu/espresso-5.3.0
> 
> 
> <..SNIP..>
> ..
> gfortran -fPIC -g -fopenmp -o pw-gpu.x \
> pwscf.o  ../../PW/src/libpw.a libpwgpu.a ../../Modules/libqemod.a 
> ../../FFTXlib/libqefft.a ../Modules/libqemodgpu.a ../../flib/ptools.a 
> ../../flib/flib.a ../../clib/clib.a ../../iotk/src/libiotk.a  
> /usr/cta/unsupported/qe/5.3.0.gpu/espresso-5.3.0/GPU/..//qe-magma/lib/libmagma.a
>   
> /usr/cta/unsupported/qe/5.3.0.gpu/espresso-5.3.0/GPU/..//lapack-3.2/lapack.a  
> /usr/cta/unsupported/qe/5.3.0.gpu/espresso-5.3.0/GPU/..//phiGEMM/lib/libphigemm.a
>   /usr/cta/unsupported/qe/5.3.0.gpu/espresso-5.3.0/GPU/..//BLAS/blas.a   
> -L/opt/nvidia/cudatoolkit6.5/6.5.14-1.0502.9613.6.1/lib64 -lcublas  -lcufft 
> -lcudart
> /usr/cta/unsupported/qe/5.3.0.gpu/espresso-5.3.0/GPU/..//qe-magma/lib/libmagma.a(zgeqp3.o):
>  In function `magma_zgeqp3':
> zgeqp3.cpp:(.text+0x33f): undefined reference to `cblas_dznrm2'
> /usr/cta/unsupported/qe/5.3.0.gpu/espresso-5.3.0/GPU/..//qe-magma/lib/libmagma.a(zlaqps.o):
>  In function `magma_zlaqps':
> zlaqps.cpp:(.text+0x207): undefined reference to `cblas_idamax'
> zlaqps.cpp:(.text+0x97c): undefined reference to `cblas_dznrm2'
> zlaqps.cpp:(.text+0xa15): undefined reference to `cblas_dznrm2'
> ..
> <..SNIP..>
> __
> Dr. James C. Ianni
> Applications Engineer
> Lockheed-Martin / Contractor
> ARL DoD Supercomputing Resource Center 
> Aberdeen Proving Ground, MD 21005
> Email:  james.c.ianni@mail.mil
> 
> 
> 
> CLASSIFICATION: UNCLASSIFIED




Re: [Pw_forum] Geometry optimization on QE530-GPU with memory allocation error?

2016-02-15 Thread Filippo Spiga
Rolly,

I assume you use some sort of script to submit or run your calculation. Do not 
run for 500K seconds: split this run into a sequence of short ones and keep the 
max time within 12h~24h. In this way you always have a chance, if something 
goes wrong in one run of your long relaxation calculation, to resume safely 
without needing to recompute too much.
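
In the PW input this is controlled by two &CONTROL keywords; a minimal sketch 
(43200 seconds = 12 hours):

  &CONTROL
    restart_mode = 'from_scratch'  ! switch to 'restart' in the follow-up runs
    max_seconds  = 43200           ! stop cleanly before the 12h mark
    ...
  /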

This suggestion is driven by common sense, not because of how QE or QE-GPU work.

HTH

--
Filippo SPIGA
* Sent from my iPhone, sorry for typos *

> On 16 Feb 2016, at 07:01, Rolly Ng <roll...@gmail.com> wrote:
> 
> Dear Paolo,
>  
> Thank you for the clarification, I will give it a trial.
>  
> Regards,
> Rolly
>  
> PhD, Research Fellow,
> Department of Physics and Materials Science,
> City University of Hong Kong
> Tel: +852 3442 4000
> Fax:+852 3442 0538
>  
> From: pw_forum-boun...@pwscf.org [mailto:pw_forum-boun...@pwscf.org] On 
> Behalf Of Paolo Giannozzi
> Sent: Tuesday, February 16, 2016 2:51 PM
> To: PWSCF Forum
> Subject: Re: [Pw_forum] Geometry optimization on QE530-GPU with memory 
> allocation error?
>  
> You do not need to update atomic coordinates: the code will read and use the 
> latest set of coordinates if you restart from a previous run (after a clean 
> stop)
> 
> Paolo
>  
> On Tue, Feb 16, 2016 at 6:39 AM, Rolly Ng <roll...@gmail.com> wrote:
> Dear Filippo,
>  
> Thanks for the quick tip.
>  
> I would like to know the correct method of stop-restart a geometry 
> optimization.
>  
> 1)  Initially, add max_seconds = 50 to the &CONTROL section
> 
> 2)  Add restart_mode = 'from_scratch' to the &CONTROL section
> 
> 3)  Run pw-gpu.x and wait for the run to stop after 50 seconds
> 
> 4)  Change restart_mode to 'restart' in the &CONTROL section
> 
> 5)  Rerun pw-gpu.x and wait for the run to stop after 50 seconds
> 
>  
> What I am not sure about is the coordinates of the atoms when restarting the 
> calculation. Since I am doing a geometry optimization, the positions of the 
> atoms do change; do I need to update to the latest coordinates after the 
> 50 seconds manually? And how can I do that?
>  
> Thanks,
> Rolly
>  
> PhD, Research Fellow,
> Department of Physics and Materials Science,
> City University of Hong Kong
> Tel: +852 3442 4000
> Fax:+852 3442 0538
>  
> From: pw_forum-boun...@pwscf.org [mailto:pw_forum-boun...@pwscf.org] On 
> Behalf Of Filippo Spiga
> Sent: Tuesday, February 16, 2016 12:20 PM
> To: PWSCF Forum
> Subject: Re: [Pw_forum] Geometry optimization on QE530-GPU with memory 
> allocation error?
>  
> Dear Rolly,
>  
> sorry to hear about your problem, I imagine the frustration of losing so much 
> time and being unable to recover because of an error that happened in the middle 
> of an SCF step. It is hard to guess what went wrong at that point, especially 
> after the calculation ran continuously on multiple GPUs for almost 7 days 
> without stopping.
>  
> Just a consideration, valid with or without GPU: unless there is no alternative, 
> _never_ run continuously for so long. It is a bad idea for multiple reasons. 
> Always safely checkpoint/restart your calculation more often.
>  
> Cheers
>  
> --
> Filippo SPIGA
> * Sent from my iPhone, sorry for typos *
> 
> On 16 Feb 2016, at 04:01, Rolly Ng <roll...@gmail.com> wrote:
> 
> Dear Filippo and QE-GPU users,
>  
> I am running a geometry optimization and the system contains 128 atoms. It 
> runs fine until the time spent reaches 590,000 seconds, when it stops with the 
> error below, and the job fails to complete; I have this error 3 times for 3 
> different cases.
>  
> “Error in memory allocation, program will be terminated (2) !!! Bye…”
>  
> I can confirm the error only appears after running for more than 560,000 
> seconds, so all the previous effort was wasted if I cannot restart the 
> optimization.
>  
> I have not seen such a problem with QE520-GPU, or maybe my previous runs did 
> not last for so long.
>  
> Could you please check my input file? Thank you!
>  
> &CONTROL
> calculation = 'relax' ,
> outdir = '/home/zgdeng/Rolly/TiNSurf200',
> pseudo_dir = '/home/zgdeng/SSSP_acc_PBE' ,
>  
> prefix = 'TiNSurf200+Biotin',
> verbosity = 'low' ,
> etot_conv_thr = 1.0D-3 ,
> forc_conv_thr = 1.0D-2 ,
> nstep = 100 ,
> tstress = .false. ,
> tprnfor = .false. ,
> /
> &SYSTEM
> ibrav = 14,
> celldm(1) = 22.9288029598d0, celldm(2)=1.2990423130d0, 
> celldm(3)=5.2512156527d0,
> cel

Re: [Pw_forum] Geometry optimization on QE530-GPU with memory allocation error?

2016-02-15 Thread Filippo Spiga
Dear Rolly,

sorry to hear about your problem, I imagine the frustration of losing so much 
time and being unable to recover because of an error that happened in the middle of 
an SCF step. It is hard to guess what went wrong at that point, especially after 
the calculation ran continuously on multiple GPUs for almost 7 days without stopping.

Just a consideration, valid with or without GPU: unless there is no alternative, 
_never_ run continuously for so long. It is a bad idea for multiple reasons. 
Always safely checkpoint/restart your calculation more often.

Cheers

--
Filippo SPIGA
* Sent from my iPhone, sorry for typos *

> On 16 Feb 2016, at 04:01, Rolly Ng <roll...@gmail.com> wrote:
> 
> Dear Filippo and QE-GPU users,
>  
> I am running a geometry optimization and the system contains 128 atoms. It 
> runs fine until the time spent reaches 590,000 seconds, when it stops with the 
> error below, and the job fails to complete; I have this error 3 times for 3 
> different cases.
>  
> “Error in memory allocation, program will be terminated (2) !!! Bye…”
>  
> I can confirm the error only appears after running for more than 560,000 
> seconds, so all the previous effort was wasted if I cannot restart the 
> optimization.
>  
> I have not seen such a problem with QE520-GPU, or maybe my previous runs did 
> not last for so long.
>  
> Could you please check my input file? Thank you!
>  
> &CONTROL
> calculation = 'relax' ,
> outdir = '/home/zgdeng/Rolly/TiNSurf200',
> pseudo_dir = '/home/zgdeng/SSSP_acc_PBE' ,
>  
> prefix = 'TiNSurf200+Biotin',
> verbosity = 'low' ,
> etot_conv_thr = 1.0D-3 ,
> forc_conv_thr = 1.0D-2 ,
> nstep = 100 ,
> tstress = .false. ,
> tprnfor = .false. ,
> /
> &SYSTEM
> ibrav = 14,
> celldm(1) = 22.9288029598d0, celldm(2)=1.2990423130d0, 
> celldm(3)=5.2512156527d0,
> celldm(4) = 0.00d0, celldm(5)=0.00d0, 
> celldm(6)=0.00d0,
> nat = 128,
> ntyp = 6,
> ecutwfc = 30d0 ,
> ecutrho = 240d0 ,
> nosym = .true. ,
> nbnd = 600,
> input_dft = 'PBE' ,
> occupations = 'smearing' ,
> degauss = 0.015d0 ,
> smearing = 'gaussian' ,
> /
> &ELECTRONS
> electron_maxstep = 1000,
> conv_thr = 1d-06 ,
> mixing_mode = 'local-TF' ,
> mixing_beta = 0.300d0 ,
> diagonalization = 'david' ,
> /
> &IONS
> ion_dynamics = 'bfgs' ,
> upscale = 100.D0 ,
> bfgs_ndim = 3 ,
> /
> ATOMIC_SPECIES
> C 12.010700d0 C_pbe_v1.2.uspp.F.UPF
> H 1.007940d0 H.pbe-rrkjus_psl.0.1.UPF
> N 14.006700d0 N.pbe.theos.UPF
> O 15.999400d0 O.pbe-n-kjpaw_psl.0.1.UPF
> S 32.065000d0 S_pbe_v1.2.uspp.F.UPF
> Ti 47.867000d0 ti_pbe_v1.4.uspp.F.UPF
> ATOMIC_POSITIONS {alat}
> Ti   0.00d0       0.00d0           0.1021361444d0   0   0   0
> Ti   0.125000d0   0.2165113823d0   0.1021361444d0   0   0   0
> Ti   0.00d0       0.1443365914d0   0.3062508969d0   1   1   1
> Ti   0.125000d0   0.3608479737d0   0.3062508969d0   1   1   1
> N    0.00d0       0.1443365914d0   0.0001050243d0   0   0   0
> N    0.125000d0   0.3608479737d0   0.0001050243d0   0   0   0
> N    0.125000d0   0.0721747909d0   0.2042197767d0   1   1   1
> N    0.00d0       0.2886731828d0   0.2042197767d0   1   1   1
> Ti   0.25d0       0.00d0           0.1021361444d0   0   0   0
> Ti   0.375000d0   0.2165113823d0   0.1021361444d0   0   0   0
> Ti   0.25d0       0.1443365914d0   0.3062508969d0   1   1   1
> Ti   0.375000d0   0.3608479737d0   0.3062508969d0   1   1   1
> N    0.25d0       0.1443365914d0   0.0001050243d0   0   0   0
> N    0.375000d0   0.3608479737d0   0.0001050243d0   0   0   0
> N    0.375000d0   0.0721747909d0   0.2042197767d0   1   1   1
> N    0.25d0       0.2886731828d0   0.2042197767d0   1   1   1
> Ti   0.50d0       0.00d0           0.1021361444d0   0   0   0
> Ti   0.625000d0   0.2165113823d0   0.1021361444d0   0   0   0
> Ti   0.50d0       0.1443365914d0   0.3062508969d0   1   1   1
> Ti   0.625000d0 

Re: [Pw_forum] quantum-espresso RPMS in Fedora/EPEL

2016-02-04 Thread Filippo SPIGA
On Jan 26, 2016, at 1:42 PM, Marcin Dulak <marcin.du...@gmail.com> wrote:
> 3. it would be convenient if the pseudos used by the test-suite are provided 
> as a separate tarball on downloads.

The test-suite is a separate tar.gz in 5.3.0. It is a recently introduced 
"feature", still in progress. 


> 4. it seems like parallel make did not work (make -j), when trying the 5.2.1 
> version.
> Please help to clarify this.

Well... if something is broken in the previous 5.2.1, the only thing we can do 
is make sure the problem is solved in future versions.


> 5. a minor detail in test-suite/run-*.sh scripts: they should use source 
> instead of include:
> sed -i "s#include #source #" test-suite/run-cp.sh
> sed -i "s#include #source #" test-suite/run-pw.sh

But if I do that then the test-suite stops working. I would also need to change 
what is in the ENVIRONMENT file. At the moment it is safe to keep include; it 
works...


> 6. there are some tests failures, e.g. on EPEL6 x86_64, with openmpi 171 out 
> of 197 tests passed (2 unknown):
> https://kojipkgs.fedoraproject.org//work/tasks/8858/12648858/build.log
> Search for the first occurence of "171" in this file.
> Some of these seem to be related to vdw part, maybe the files the presence of
> which is assumed for the tests are not created properly?

Those 2 "unknown" are due to the fact that the test cases does not generate any 
specific output that can be compared with previously stored reference outputs. 
The script which extract numerical quantities needs to be extended and it is 
something in the TODO list.

HTH

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga





Re: [Pw_forum] System memory issue using --with-pinned-mem on QE-GPU

2016-01-21 Thread Filippo SPIGA
On Jan 21, 2016, at 12:35 AM, Rolly Ng <roll...@gmail.com> wrote:
> My server has total of 144GB RAM and 8x C2050. These cards have 3GB memory 
> each, so there are 24 GB of video RAM. If I compile px-gpu.x with the option 
> "--with-pinned-mem", does it lock 24GB of system memory out of 144 GB? and I 
> will have 120 GB system RAM left?

Not exactly: it "locks" a portion of the CPU memory, but how much depends on the 
input, not on the amount of memory on the GPU. It should be less, but it depends on 
the input case (specific data structures are "pinned", others are not). 
By default, very few of the data structures that are transferred to the GPU are 
already allocated as page-locked. 

Give it a try: if your calculation starts and completes a few SCF iterations, then 
it will most likely run until the very end. You may see a small (very 
small) increase in performance. But if too much memory is pinned, the system 
can get slow. It is not intuitive why, but it can happen.

HTH

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga





Re: [Pw_forum] CUDA Support.

2016-01-21 Thread Filippo SPIGA
On Jan 21, 2016, at 6:18 PM, Mohammed Ghadiyali <m7...@live.co.uk> wrote:
> I have a few questions regarding the CUDA support of Quantum ESPRESSO, as we are 
> planning to procure a server. And one of the configurations we are looking 
> into has dual Intel Xeon cpu's and four nvidia's graphic cards, similar to 
> this:
> http://www.thinkmate.com/system/gpx-xt4-2160v3-titan
> 
> 1. Does it support the consumer graphics cards like GTX 980ti or GTX TitanX 
> (as the K40 and K80 basically cost an arm and a leg, we could buy four 
> consumer graphics cards for the price of one)?

No, QE-GPU is not designed to use gaming cards; you need an NVIDIA TESLA (K20, 
K40 or K80). The code will work on an NVIDIA GTX TITAN but performance will be 
crap.


> 3. Or we should go for complete CPU solution?

If you do not have enough funding to buy an NVIDIA TESLA with proper 
double-precision support, or if you are planning to use the server _only_ for QE, 
then I suggest you go for the CPU-only solution.


> 4. I know that CUDA is nvida's platform but is it possible to run Quantum 
> ESPRESSO on AMD's platform (i.e on AMD APU series processors)?

No.

HTH

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga



Re: [Pw_forum] compiling QE: fftw problems

2016-01-19 Thread Filippo SPIGA
Dear Denis,

are you using OpenMPI or Intel MPI?

If Open MPI, try also 

./configure MPIF90=mpif90 ...


If Intel MPI (I assume this is your case), try instead 

./configure MPIF90=mpiifort ... --with-scalapack=intel


In the meantime, please send me personally (not to the mailing list!) the 
"install/config.log" file and your make.sys.

Regards

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga


On Jan 20, 2016, at 12:20 AM, Denis E. Zavelev <metal...@mail.ru> wrote:
> Hello!
> 
> I am trying to compile QE on JSC RAS cluster. As I have user permissions, I 
> can install any programs only locally.
> Cluster works under Linux. We have intel compilers (mpif90, icc, ifort) and 
> MKL installed on cluster. No FFTW libraries are installed though even MKL 
> ones (I have to build them locally). 
> 
> I've downloaded espresso 5.3.0 from the site. 
> Configure script finished successfully. But then I got 2 warnings and 
> subsequent error message when compiling internal FFT.
> 
> I decided to use some other FFT libs.
> So I've downloaded FFTW3 from its site, successfully built, tested and 
> installed it. But the same error message. 
> 
> Libraries found by configure script:
>  BLAS_LIBS=  -lmkl_intel_lp64  -lmkl_sequential -lmkl_core
>  LAPACK_LIBS=
>  SCALAPACK_LIBS=-lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64
>  FFT_LIBS= -lfftw3 
> 
> Here's the output:
> 
> 
> bash-3.2$ make pw
> make: Warning: File `make.sys' has modification time 0.47 s in the future
> test -d bin || mkdir bin
> ( cd FFTXlib ; make TLDEPS= all || exit 1 )
> make[1]: Entering directory 
> `/nethome/metalian/espresso/espresso-5.3.0/FFTXlib'
> make[1]: Warning: File `../make.sys' has modification time 0.46 s in the 
> future
> mpif90 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0 -nomodule 
> -fpp -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include 
> -I../iotk/src -I. -c fft_types.f90
> mpif90 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0 -nomodule 
> -fpp -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include 
> -I../iotk/src -I. -c scatter_mod.f90
> icc -O3 -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK  -I../include  -c 
> fftw.c
> fftw.c(27449): warning #188: enumerated type mixed with another type
>  EXPECT_INT(dir);
>  ^
> 
> fftw.c(27450): warning #188: enumerated type mixed with another type
>  EXPECT_INT(type);
>  ^
> 
> mpif90 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0 -nomodule 
> -fpp -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include 
> -I../iotk/src -I. -c fft_scalar.f90
> mpif90 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0 -nomodule 
> -fpp -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include 
> -I../iotk/src -I. -c fft_parallel.f90
> mpif90 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0 -nomodule 
> -fpp -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include 
> -I../iotk/src -I. -c fft_smallbox.f90
> mpif90 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0 -nomodule 
> -fpp -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include 
> -I../iotk/src -I. -c fft_interfaces.f90
> mpif90 -O2 -assume byterecl -g -traceback -par-report0 -vec-report0 -nomodule 
> -fpp -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include 
> -I../iotk/src -I. -c stick_base.f90
> stick_base.f90(169): error #6404: This name does not have a type, and must 
> have an explicit type.   [MPI_IN_PLACE]
>  CALL MPI_ALLREDUCE(MPI_IN_PLACE, st, SIZE(st), MPI_INTEGER, MPI_SUM, 
> comm, ierr)
> -^
> compilation aborted for stick_base.f90 (code 1)
> make[1]: *** [stick_base.o] Error 1
> make[1]: Leaving directory `/nethome/metalian/espresso/espresso-5.3.0/FFTXlib'
> make: *** [libfft] Error 1
> 
> 
> I've also tried espresso 5.2.1. Configure works the same way, but compilation 
> also fails, though not so fast; it ends on the following:
> 
> fft_scalar.f90(69): #error: can't find include file: fftw3.f
> make[1]: *** [fft_scalar.o] Error 1
> make[1]: Leaving directory `/nethome/metalian/espresso/espresso-5.2.1/Modules'
> make: *** [mods] Error 1
> 
> This is strange beca

Re: [Pw_forum] Version 5.3.0 of Quantum ESPRESSO is available for download

2016-01-15 Thread Filippo SPIGA
Dear everybody,

Windows executables for QE 5.3.0 (both serial and parallel, for 32 and 64 bits), 
courtesy of Axel Kohlmeyer, are now available on QE-FORGE at the same address 
below.

Best Regards


On Jan 11, 2016, at 5:23 PM, Filippo SPIGA <filippo.sp...@quantum-espresso.org> 
wrote:
> 
> I am pleased to announce that Version 5.3.0 of Quantum ESPRESSO (SVN 
> revision 11974) is now available for download. 
> 
> You can find all related packages published on the QE-FORGE website at this 
> link: 
> http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView_id=204
> 
> Or download directly espresso-5.3.0.tar.gz here: 
> http://qe-forge.org/gf/download/frsrelease/204/912/espresso-5.3.0.tar.gz
> 
> Please refer to the file "Doc/release-notes" for additional details about the 
> release (new features, fixes, incompatibilities, known bugs). For any new 
> bug, suspicious misbehavior or any proven reproducible wrong result please 
> get in contact with the developers by writing directly to 
> q-e-develop...@qe-forge.org .
> 
> Happy Computing!

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga



[Pw_forum] Version 5.3.0 of Quantum ESPRESSO is available for download

2016-01-11 Thread Filippo SPIGA
Dear everybody,

I am pleased to announce that Version 5.3.0 of Quantum ESPRESSO (SVN 
revision 11974) is now available for download. 

You can find all related packages published on the QE-FORGE website at this 
link: 
http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView_id=204

Or download directly espresso-5.3.0.tar.gz here: 
http://qe-forge.org/gf/download/frsrelease/204/912/espresso-5.3.0.tar.gz

Please refer to the file "Doc/release-notes" for additional details about the 
release (new features, fixes, incompatibilities, known bugs). For any new bug, 
suspicious misbehavior or any proven reproducible wrong result please get in 
contact with the developers by writing directly to q-e-develop...@qe-forge.org .

Happy Computing!

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga





Re: [Pw_forum] QE-GPU with Nvidia K80 failed

2016-01-09 Thread Filippo Spiga
I identified where the problem comes from; I can apply a patch to the next 
QE-GPU release, which will be out this weekend.

I will send you a separate file for QE-GPU 14.10.

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert



> On Jan 9, 2016, at 4:17 PM, Rolly Ng <roll...@gmail.com> wrote:
> 
> Hello Filippo,
> 
> Just like to confirm which version of driver should I installed for K80 
> with CUDA 6.5?
> 
> With C2050, I have 340.65 and it works.
> 
> Not sure about K80? Thank you!
> 
> Regards,
> Rolly
> 
> On 01/09/2016 06:00 PM, Filippo Spiga wrote:
>> Hello Rolly,
>> 
>> Please try to use "--with-gpu-arch=sm_35"; compute capability 3.5 is 
>> supported for the K80. I do not recall adding explicit support for 3.7 in 
>> QE-GPU. From an optimization perspective, I can guarantee there is nothing 
>> in the code that leverages the K80 better than any other NVIDIA Kepler 
>> architecture GPU.
>> 
>> The best suggestion is to move to QE 5.3.0 and get the QE-GPU bundled with it. 
>> Updates will arrive with the next release as well, since now the two are 
>> aligned. The tar.gz and announcement will be made during the weekend.
>> 
>> Cheers
>> 
>> --
>> Mr. Filippo SPIGA, M.Sc.
>> Quantum ESPRESSO Foundation
>> http://www.quantum-espresso.org ~ skype: filippo.spiga
>> 
>> 
>>> On Jan 9, 2016, at 9:28 AM, Rolly Ng <roll...@gmail.com> wrote:
>>> 
>>> Dear QE-GPU developers,
>>> 
>>> I am testing the latest NV K80 for QE-GPU, and the test bed has 4x K80,
>>> total of 8 GPU cores.
>>> 
>>> I was successful using QE-GPU on C2050 with sm_20, but the K80 comes
>>> with sm_37.
>>> 
>>> I have cuda-6.5 installed and I am using QEv5.2.0 + QE-GPUv14.10.0
>>> complied okay. However, when I run the pw-gpu.x, it failed with
>>> *** ERROR *** something went wrong inside query_gpu_specs! (rank 0) ...
>>> (rank 7)
>>> 
>>> Should I upgrade to cuda 7.0 or 7.5 and QEv5.3.0 + QE-GPUv5.3?
>>> 
>>> Thank you,
>>> Rolly
>>> 
>>> -- 
>>> PhD. Research Fellow,
>>> Dept. of Physics & Materials Science,
>>> City University of Hong Kong
>>> Tel: +852 3442 4000
>>> Fax: +852 3442 0538
>>> 
> 
> -- 
> PhD. Research Fellow,
> Dept. of Physics & Materials Science,
> City University of Hong Kong
> Tel: +852 3442 4000
> Fax: +852 3442 0538
> 



Re: [Pw_forum] QE-GPU with Nvidia K80 failed

2016-01-09 Thread Filippo Spiga
Hello Rolly,

Please try to use "--with-gpu-arch=sm_35"; compute capability 3.5 is supported 
for the K80. I do not recall adding explicit support for 3.7 in QE-GPU. From an 
optimization perspective, I can guarantee there is nothing in the code that 
leverages the K80 better than any other NVIDIA Kepler-architecture GPU.
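
For example, when configuring in the GPU/ folder (the CUDA path is illustrative; 
the other flags as in your existing build):

$ ./configure --enable-parallel --enable-cuda --with-gpu-arch=sm_35 \
    --with-cuda-dir=/usr/local/cuda-6.5 --with-phigemm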

The best suggestion is to move to QE 5.3.0 and get the QE-GPU bundled with it. 
Updates will arrive with the next release as well, since now the two are 
aligned. The tar.gz and announcement will be made during the weekend.

Cheers

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://www.quantum-espresso.org ~ skype: filippo.spiga


> On Jan 9, 2016, at 9:28 AM, Rolly Ng <roll...@gmail.com> wrote:
> 
> Dear QE-GPU developers,
> 
> I am testing the latest NV K80 for QE-GPU, and the test bed has 4x K80, 
> total of 8 GPU cores.
> 
> I was successful using QE-GPU on C2050 with sm_20, but the K80 comes 
> with sm_37.
> 
> I have cuda-6.5 installed and I am using QEv5.2.0 + QE-GPUv14.10.0 
> complied okay. However, when I run the pw-gpu.x, it failed with
> *** ERROR *** something went wrong inside query_gpu_specs! (rank 0) ... 
> (rank 7)
> 
> Should I upgrade to cuda 7.0 or 7.5 and QEv5.3.0 + QE-GPUv5.3?
> 
> Thank you,
> Rolly
> 
> -- 
> PhD. Research Fellow,
> Dept. of Physics & Materials Science,
> City University of Hong Kong
> Tel: +852 3442 4000
> Fax: +852 3442 0538
> 




Re: [Pw_forum] Using solvent on QE-GPU?

2015-12-01 Thread Filippo Spiga
On Nov 30, 2015, at 11:29 AM, Oliviero Andreussi <oliviero.andreu...@usi.ch> 
wrote:
> I am just waiting for the next official release of QE to come out (this 
> December, as far as I know).  

Correct.


> I am not sure Environ is fully compatible with the GPU plugin, as I have not 
> tested yet this combination.

Me neither. 

Rolly, to be safe I would assume ENVIRON and QE-GPU are not compatible, but feel 
free to try them together and validate the results against a non-GPU execution.

Regards

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga



Re: [Pw_forum] 5.2.1 release building issue

2015-11-15 Thread Filippo Spiga
Hello Eric,

can you share the config.log file (located in install/) generated by configure? 
Thanks!

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert



> On Nov 10, 2015, at 5:21 AM, Éric Germaneau <german...@sjtu.edu.cn> wrote:
> 
> Dear all,
> 
> I'm using Intel compiler 15.0, MKL 11.2, and IMPI 5.0.1 to compile QE 
> version 5.2.1 but I am getting the error below:
> .
> a - iotk_xtox_interf.o
> ranlib libiotk.a
> mpiifort -O3 -xhost -openmp -mkl -nomodule -openmp -fpp -D__INTEL -D__FFTW 
> -D__MPI -D__PARA -D__OPENMP   -I../include  -c iotk_print_kinds.f90
> make loclib_only
> make[3]: Entering directory `/path/to/QE/5.2.1/espresso-5.2.1/S3DE/iotk/src'
> make[3]: Nothing to be done for `loclib_only'.
> make[3]: Leaving directory `/path/to/QE/5.2.1/espresso-5.2.1/S3DE/iotk/src'
> xild -static-intel  -openmp -o iotk_print_kinds.x iotk_print_kinds.o 
> libiotk.a   
> ipo: warning #11016: Warning unknown option -static-intel
> xild: executing 'ld'
> ld: unrecognized -a option `tic-intel'
> make[2]: *** [iotk_print_kinds.x] Error 1
> make[2]: Leaving directory `/path/to/QE/5.2.1/espresso-5.2.1/S3DE/iotk/src'
> make[1]: *** [libiotk] Error 2
> make[1]: Leaving directory `/path/to/QE/5.2.1/espresso-5.2.1/install'
> make: *** [libiotk] Error 2
> Here is how I proceed:
> export CC="icc"
> export FC="ifort"
> export MPIF90="mpiifort"
> export CFLAGS="-O3 -xhost -openmp -mkl"
> export FFLAGS="-O3 -xhost -openmp -mkl"
> ./configure --enable-parallel --enable-openmp --without-scalapack
> make pw
> I'm wondering if someone else also got this error.
> 
> Thank you.
> 
> -- 
> Éric Germaneau (艾海克), Specialist
> Center for High Performance Computing
> Shanghai Jiao Tong University
> Room 205 Network Center, 800 Dongchuan Road, Shanghai 200240 China
> Email:german...@sjtu.edu.cn Mobi:+86-136-4161-6480 http://hpc.sjtu.edu.cn



Re: [Pw_forum] QE-GPU 14.10.0 with intel compilers

2015-10-29 Thread Filippo Spiga
The binary that uses the GPU is located under GPU/PW/pw-gpu.x

Try ldd:

$ ldd GPU/PW/pw-gpu.x

You will spot the CUDA library dependencies, a sign that the binary needs the GPU 
libraries and so will use the GPUs.
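
The output will contain lines like these (illustrative; paths and versions depend 
on your CUDA installation):

  libcublas.so.6.5 => /usr/local/cuda-6.5/lib64/libcublas.so.6.5
  libcufft.so.6.5 => /usr/local/cuda-6.5/lib64/libcufft.so.6.5
  libcudart.so.6.5 => /usr/local/cuda-6.5/lib64/libcudart.so.6.5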

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert



> On Oct 29, 2015, at 11:55 AM, Dr. NG Siu Pang <roll...@cityu.edu.hk> wrote:
> 
> Dear Filippo,
> 
> Apologies for my mistake, my input command was incorrect, now it runs with
> mpirun -genvall -np 8 ./pw.x -inp ~/QE520/AUSURF112/ausurf.in |tee 
> ~/QE520/AUSURF112/ausurf.out
> 
> I see with 'top' that it takes about 26G of RAM and 8 instances of pw.x are running.
> 
> However, I check with 'nvidia-smi',
> Thu Oct 29 19:55:07 2015
> +------------------------------------------------------+
> | NVIDIA-SMI 340.65     Driver Version: 340.65         |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  Tesla C2050         Off  | 0000:0C:00.0     Off |                    0 |
> | 30%   52C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  Tesla C2050         Off  | 0000:0D:00.0     Off |                    0 |
> | 30%   50C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  Tesla C2050         Off  | 0000:11:00.0     Off |                    0 |
> | 30%   54C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   3  Tesla C2050         Off  | 0000:12:00.0     Off |                    0 |
> | 30%   50C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   4  Tesla C2050         Off  | 0000:83:00.0     Off |                    0 |
> | 30%   46C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   5  Tesla C2050         Off  | 0000:84:00.0     Off |                    0 |
> | 30%   45C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   6  Tesla C2050         Off  | 0000:87:00.0     Off |                    0 |
> | 30%   47C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   7  Tesla C2050         Off  | 0000:88:00.0     Off |                    0 |
> | 30%   46C    P0    N/A /  N/A |     6MiB /  2687MiB  |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> 
> +-----------------------------------------------------------------------------+
> | Compute processes:                                               GPU Memory |
> |  GPU       PID  Process name                                     Usage      |
> |=============================================================================|
> |  No running compute processes found                                         |
> +-----------------------------------------------------------------------------+
> 
> It said no compute process found? Is this true?
> 
> Thanks,
> Rolly
> 
> PhD, Research Fellow,
> Department of Physics and Materials Science,
> City University of Hong Kong
> Tel: +852 3442 4000
> Fax:+852 3442 0538
> 
> 
> 

Re: [Pw_forum] QE-GPU 14.10.0 with intel compilers

2015-10-28 Thread Filippo Spiga
On Oct 28, 2015, at 10:36 AM, Dr. NG Siu Pang <roll...@cityu.edu.hk> wrote:
> I would like to run GPU on the examples in PW/examples. How can I do that? 
> Should I modify the run_all_examples script?

it will work out of the box if you run mpirun correctly; you just need to play 
with the scripts. Anyway, the examples are too small to show any sort of 
acceleration, and some exercise functionalities of the code that are not 
GPU-accelerated.

If you have a real test case instead of an example, I suggest starting directly 
with it. If you have no experience with the code, use the CPU-only version first.

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga





Re: [Pw_forum] QE-GPU 14.10.0 with intel compilers

2015-10-28 Thread Filippo Spiga
On Oct 27, 2015, at 5:29 PM, Dr. NG Siu Pang <roll...@cityu.edu.hk> wrote:
> I used ./configure CC=icc F90=ifort F77=ifort MPIF90=mpiifort
> ./PW/tests/check-pw.x.j runs all okay

Weird, in another email you said it was not working.

 
> I have copied GPU to the espresso-5.2.0 folder and I noticed that for 
> parallel configuration, I need to do
> $ cd GPU
> $ ./configure --enable-parallel --enable-openmp --with-scalapack \
>   --enable-cuda --with-gpu-arch=sm_35 \
>   --with-cuda-dir= \
>   --without-magma --with-phigemm
> $ cd ..
> $ make -f Makefile.gpu pw-gpu 
>  
> These look like they use non-Intel compilers, so can I use the following instead?
> $ cd GPU
> $ ./configure CC=icc F90=ifort F77=ifort MPIF90=mpiifort \
> --enable-parallel --enable-openmp --with-scalapack \
> --enable-cuda --with-gpu-arch=sm_35 \
> --with-cuda-dir= \
> --without-magma --with-phigemm


This is the server with the 8 C2050 GPUs? As I mentioned in the other email, 
"--with-gpu-arch" must be "sm_20", otherwise the GPU code will fail. If you have 8 
GPUs in the same server there is no point in having OpenMP, and if you run on a 
single server there is probably no point in having ScaLAPACK.

Try:

$ ./configure CC=icc F90=ifort F77=ifort MPIF90=mpiifort --enable-parallel \
    --disable-openmp --without-scalapack --enable-cuda \
    --with-gpu-arch=sm_20 --with-cuda-dir= \
    --with-magma --with-phigemm

You must make sure mpirun handles binding to cores and scatters across sockets 
(if more than one) properly, otherwise performance may suck. Or, if I guess 
correctly,

export I_MPI_PIN=on
export I_MPI_PIN_PROCESSOR_LIST=all:map=bunch
mpirun -genvall -np 8 ./pw.x -input ...

HTH

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga



Re: [Pw_forum] QE-GPU installation help

2015-10-28 Thread Filippo Spiga
Rolly,

if there is no pw.x executable, how can you be so sure that compilation ended 
successfully?

I think you need to start from step 0. Log out from the current terminal/ssh 
session (so the environment is clean), erase your current espresso directory, untar 
it again, re-run configure without exporting anything more than the necessary 
parts to detect the Intel compilers and MKL, run "make -j2 all", and re-run 
./check-pw.x.j.

A good netiquette policy for the mailing list: do not cut & paste long output 
files, attach them. I personally ignore _every_ email longer than 50 lines. 
Thanks.

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga


> On Oct 27, 2015, at 12:35 PM, Dr. NG Siu Pang <roll...@cityu.edu.hk> wrote:
> 
> Dear Paolo,
>  
> Thank you.
>  
> make all has completed.
>  
> But as I run the test,
> zgdeng@NVGPU-P2807:~/QE520/espresso-5.2.0/PW/tests> ./check-pw.x.j
> Checking atom-lsda/check-pw.x.j: line 259: 
> /home/zgdeng/QE520/espresso-5.2.0/PW/src/pw.x: No such file or directory
> FAILED with error condition!
> Input: atom-lsda.in, Output: atom-lsda.out, Reference: atom-lsda.ref
> Aborting
>  
> When I check espresso-5.2.0/PW/src/, there is no pw.x
>  
> What should I do?
>  
> Thank you,
> Rolly
>  
> PhD, Research Fellow,
> Department of Physics and Materials Science,
> City University of Hong Kong
> Tel: +852 3442 4000
> Fax:+852 3442 0538
>  
> From: pw_forum-boun...@pwscf.org [mailto:pw_forum-boun...@pwscf.org] On 
> Behalf Of Paolo Giannozzi
> Sent: 2015年10月27日 18:12
> To: PWSCF Forum
> Subject: Re: [Pw_forum] QE-GPU installation help
>  
>  
>  
> On Tue, Oct 27, 2015 at 10:45 AM, Dr. NG Siu Pang <roll...@cityu.edu.hk> 
> wrote:
> 
> Are these 2 comes with MKL? 
>  
> yes
>  
> How can I tell QE to use the LAPACK_LIBS and FFT_LIBS from MKL?
>  
> you don't need to do anything
> 
> Paolo
> -- 
> Paolo Giannozzi, Dept. Chemistry,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
> 
> 
> 



Re: [Pw_forum] QE-GPU installation help

2015-10-25 Thread Filippo Spiga
Dear Rolly,

GCC is fine, you do not need Intel compilers if you do not have a license. MKL 
is preferable.

The C2050s are quite old cards: I have not been running or testing that GPU 
architecture since at least the beginning of 2014, and because of the current 
trends in CPU and GPU architecture it will not be supported further by the 
application (but it should still work). QE-GPU 14.10.0 is compatible with the 
latest QE: just move the directory into the ESPRESSO_ROOT and run the configure 
accordingly. The compute capability of the C2050 is "sm_20". MPI is needed to run 
on the 8 GPUs; you can use OpenMPI if you do not have a license for Intel MPI. 
You need phiGEMM (self-compiled), you need MAGMA, and you can skip ScaLAPACK.

Before trying the GPU version of QE, please make sure you are able to compile, 
run and test the correctness of your results on your server/cluster using the 
simple CPU-only version.
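
For example, a minimal CPU-only sanity check (adjust the configure options to 
your toolchain):

$ ./configure --enable-parallel
$ make pw
$ cd PW/tests
$ ./check-pw.x.j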

HTH

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga


> On Oct 24, 2015, at 3:30 PM, Dr. NG Siu Pang <roll...@cityu.edu.hk> wrote:
> 
> Dear all,
>  
> I would like to test the GPU acceleration on QE and we have a server with 8x 
> C2050 GPU cards running OpenSuSE 12.3.
>  
> I have searched https://github.com/fspiga/QE-GPU, but I found that the latest 
> QE-GPU 14.10.0 is designed for QE 5.1. Since QE-GPU 14.10.0 dates back to 
> 2014, I am not sure if it works with the latest QE 5.2.1.
>  
> So, I would like to have your advice on the QE-GPU installation. These are 
> already installed on OpenSUSE 12.3:
> 1)  CUDA Version: 6.5-14
> 2)  CUDA Version: 6.0-37
> 3)  Nvidia graphic driver, Name: nvidia-gfxG03-kmp-desktop, version: 
> 340.65_k3.7.10_1.1-32.1
> 4)  Nvidia computing driver, Name: nvidia-computeG03, version: 340.65-32.1
>  
> Do I need to install the following Intel tools instead of gcc?
> 5)  Intel composer_xe_2013 sp1.0.080 (what about the latest version? does 
> it work?)
> 6)  Intel ifort
> 7)  Intel MKL
> 8)  Intel MPI (so MPI is needed to run all 8 GPUs on the server?)
>  
> Do I also need scalapack? Where can I get this?
>  
> Do I need phigemm? Where can I get this as well? 
>  
> Thank you very much,
> Rolly
> 


Re: [Pw_forum] Version 5.2.1 for the PP program

2015-09-28 Thread Filippo Spiga
Dear Natalie,

which version of the Intel compiler? We need to track this information to fix 
backward compatibility and increase our testing coverage. Anyway, if needed I 
can generate a patch to revert only this specific issue within the next 24h.

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga


> On Sep 27, 2015, at 11:55 PM, Holzwarth, Natalie <nata...@wfu.edu> wrote:
> 
> In testing the new 5.2.1 version of the program, I noticed that one change 
> from 5.1 was in the partial density of states output from partialdos.f90 for 
> the title line.
> 
> For example, in version 5.1, the title line for the pdos output used
> WRITE(4,'(" pdos(E)   ",$)')
> 
> while in version 5.2.1 the same pdos output is changed to
> WRITE(4,'(" pdos(E)   "), advance="NO"')
> 
> For our Intel compiler, the 5.1 version keeps the title line on a single 
> line, while the 5.2.1 version puts each piece of the title line on a 
> different line. I guess it is not processing the advance="NO" specifier. 
> Since this title line is making the postprocessing difficult, I am tempted 
> to put those write statements back to the 5.1 version, or perhaps there is a 
> better solution. Thanks in advance for your advice on this. Natalie
> 
> N. A. W. Holzwarth            email: nata...@wfu.edu
> Department of Physics         web: http://www.wfu.edu/~natalie
> Wake Forest University        phone: 1-336-758-5510
> Winston-Salem, NC 27109 USA   office: Rm. 300 Olin Physical Lab


[Pw_forum] Version 5.2.1 of Quantum ESPRESSO is available for download

2015-09-24 Thread Filippo Spiga
Dear everybody,

I am pleased to announce that Version 5.2.1 of Quantum ESPRESSO (SVN 
revision 11758) is now available for download.

You can find all related packages published on the QE-FORGE website at this 
link: 
http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView&release_id=199

Or download directly espresso-5.2.1.tar.gz here: 
http://qe-forge.org/gf/download/frsrelease/199/855/espresso-5.2.1.tar.gz

Please refer to the file "Doc/release-notes" for additional details about the 
release (new features, fixes, incompatibilities, known bugs). For any new bug, 
suspicious misbehavior or any proven reproducible wrong result please get in 
contact with the developers by writing directly to q-e-develop...@qe-forge.org .

Happy Computing!

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga



Re: [Pw_forum] INSTALL QE ON MAC PRO SIX CORE (INTEL XEON E5)

2015-09-07 Thread Filippo Spiga
On Sep 7, 2015, at 5:05 PM, reyna mendez camacho <reyna1...@hotmail.com> wrote:
> 5.- Change to the espresso directory and run the configure script
>cd espresso-5.1
>   export LAPACK_LIBS="-mkl=parallel"
>   export BLAS_LIBS= "-mkl=parallel"
>export FFT_LIBS= "-mkl=parallel" 
>export MPIF90=mpiifort
>export AR=xiar
>./configure --enable-openmp


And please do not export all those variables before running the configure. The 
configure in QE 5.1 should be smart enough to pick up the right MKL libraries 
without you exporting any of those variables manually. Try without and see how 
it goes.

If you are going to run QE on a single six-core socket, either avoid OpenMP if 
MPI parallelism is enabled, or disable MPI and leverage OpenMP. Both together do 
not make much sense with such a small number of cores. The two setups are 
sketched below.
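
For example, on six cores the two sensible setups look like this (a sketch; the 
input file name is illustrative):

# pure MPI, no OpenMP:
export OMP_NUM_THREADS=1
mpirun -np 6 pw.x -inp input.in

# pure OpenMP, no MPI:
export OMP_NUM_THREADS=6
pw.x -inp input.in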

HTH

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga



Re: [Pw_forum] about MPI and OpenMP threads for QE-GPU

2015-09-01 Thread Filippo Spiga
On Aug 25, 2015, at 1:46 AM, Mutlu COLAKOGULLARI 
<mutlucolakogull...@trakya.edu.tr> wrote: 
> So, I am having trouble sharing the cores between MPI processes and OpenMP threads. 

Simple rules valid for QE-GPU runs:
- one MPI process for each GPU
- if the number of GPUs equals the number of sockets in the node, set as many 
OpenMP threads as there are cores per socket
- if the number of GPUs does not equal the number of sockets in the node, set 
OpenMP threads to fill all cores of the node or (probably better) do not use 
OpenMP at all (= MPI + GPU, --disable-openmp)

If you make it work following these rules and you get numbers out of it, then you 
can double the number of MPI processes for each GPU (instead of a 1:1 ratio, a 
2:1 ratio). An example is sketched below.
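
For instance, on a hypothetical node with 2 sockets, 8 cores per socket and 
2 GPUs (so the number of GPUs equals the number of sockets), the first two rules 
give (a sketch; the input file name is illustrative):

export OMP_NUM_THREADS=8                 # cores per socket
mpirun -np 2 ./pw-gpu.x -inp input.in    # one MPI process per GPU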

HTH

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] about MPI and OpenMP threads for QE-GPU

2015-09-01 Thread Filippo Spiga
Dear Mutlu,

On Aug 25, 2015, at 1:46 AM, Mutlu COLAKOGULLARI 
<mutlucolakogull...@trakya.edu.tr> wrote:
> QE-GPU has been installed with Intel Cluster Suite 13, CUDA 5.5 and the latest 
> SVN commits of QE and QE-GPU. 

Please try QE 5.1.2 or 5.2.0 as reference versions, not the SVN.

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] Numeric of

2015-08-04 Thread Filippo Spiga
Dear Cheung,

do not use the public version of QE-GPU 14.x for spin-magnetization 
calculations. I have been aware of this problem, which appears in some corner 
cases, for a while, but the code does not automatically warn people that they 
are running an unsupported feature. Maybe in the future unsupported features 
need to trigger an explicit ABORT to warn users.

Use the CPU version for now; the problem will be fixed soon.

F

On Aug 3, 2015, at 6:58 PM, Cheung, Samson H. (ARC-TN)[Computer Sciences 
Corporation] <samson.h.che...@nasa.gov> wrote:
> Sorry, I have not finished the email!!
> Let me continue:
> 
> 
> pw_cpu.out:
>  Magnetic moment per site:
>  atom:   1    charge:    1.6134    magn:   -0.0000    constr:    0.0000
>  atom:   2    charge:    1.6136    magn:    0.0000    constr:    0.0000
>  atom:   3    charge:    1.6135    magn:   -0.0000    constr:    0.0000
>  atom:   4    charge:    1.6136    magn:   -0.0000    constr:    0.0000
> …
> 
> pw_gpu.out:
>  Magnetic moment per site:
>  atom:   1    charge:    2.0472    magn:   -0.7654    constr:    0.0000
>  atom:   2    charge:    2.0434    magn:   -0.7580    constr:    0.0000
>  atom:   3    charge:    2.0454    magn:   -0.7623    constr:    0.0000
>  atom:   4    charge:    2.0426    magn:   -0.7574    constr:    0.0000
>  …
> 
> Do you have any theory about the numerical difference between the CPU and GPU 
> case?
> Many thanks!!
> 
> ~Samson
> 
> 
> From: "Cheung, Samson H. (ARC-TN)[Computer Sciences Corporation]" 
> <samson.h.che...@nasa.gov>
> Date: Monday, August 3, 2015 at 10:56 AM
> To: "spiga.fili...@gmail.com" <spiga.fili...@gmail.com>
> Cc: "pw_forum@pwscf.org" <pw_forum@pwscf.org>
> Subject: Numeric of 
> 
> 
> 
> 
> I downloaded GRIR443 case from 
> http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseBrowse&package_id=36
> 
> The way I compile the CPU case is follow:
> %> setenv FC ifort
> %> setenv LDFLAGS -lmpi
> %> setenv CC icc
> %> setenv FCFLAGS "-O3 -axCORE-AVX2 -xSSE4.2 -assume byterecl"
> %>  ./configure --prefix=/nobackup/scheung/espresso LIBDIRS="-mkl"
> 
> The way I get and compile the GPU add-on is follow:
> %> wget https://github.com/fspiga/QE-GPU/archive/v14.10.0.tar.gz
> %> tar zxvf v14.10.0.tar.gz
> %> mv QE-GPU-14.10.0/GPU espresso-5.1.2/
> %> cd espresso-5.1.2/
> %> module load cuda/6.5
> %> cd GPU/
> %> ./configure --enable-parallel --enable-openmp --enable-cuda 
> --with-gpu-arch=sm_35 --with-cuda-dir=/nasa/cuda/6.5 --without-magma 
> --with-phigemm  --without-scalapack
> %> cd ..
> %> make -f Makefile.gpu -j4 pw-gpu
> 
> 
> The  way I ran the codes is follow:
> mpiexec -np 80 ./pw-gpu.x  -input grir443.in > pw_gpu.out
> mpiexec -np 80 ./pw.x  -input grir443.in > pw_cpu.out
> 
> However, I see differences between the numbers reported in pw_gpu.out 
> and pw_cpu.out:
> 
> pw_cpu.out:
>  Magnetic moment per site:
>  atom:   1    charge:    1.6134    magn:   -0.0000    constr:    0.0000
>  atom:   2    charge:    1.6136    magn:    0.0000    constr:    0.0000
>  atom:   3    charge:    1.6135    magn:   -0.0000    constr:    0.0000
>  atom:   4    charge:    1.6136    magn:   -0.0000    constr:    0.0000
> 
> 
> 

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga



Re: [Pw_forum] Error in routine d_matrix_so

2015-07-06 Thread Filippo Spiga
FYI the latest version of QE is 5.2.0. See 
http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView&release_id=195

F

On Jul 3, 2015, at 11:06 AM, Rajdeep Banerjee <rajdeep@gmail.com> wrote:
> Dear Prof. Paolo Giannozzi,
>  ok, I'll run it in espresso-5.1.1 
> and get back to you.
> 
> 
> Thanks and regards,
> -- 
> Rajdeep Banerjee
> PhD student
> JNCASR, Bangalore
> India

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [QE-GPU] compilation error

2015-06-26 Thread Filippo Spiga
Dear Mohammed,

please next time use your institutional email address or, at least, write 
which institution you come from.

Try to use Intel MKL or ATLAS (that may require manual changes in the make.sys) 
or OpenBLAS (this may require manual changes in the make.sys as well). Once you 
are able to successfully run the configure and make using GNU + Intel MKL, then 
enable GPU support, for example as sketched below.
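
A possible sequence (a sketch; the GPU flags mirror those used elsewhere in this 
thread, and the CUDA path is illustrative):

# step 1: CPU-only build, letting configure pick up MKL
./configure --enable-parallel --enable-openmp
make pw
# step 2: once the CPU build runs and gives correct results, enable GPU support
cd GPU
./configure --enable-parallel --enable-openmp --enable-cuda \
  --with-gpu-arch=sm_35 --with-cuda-dir=/usr/local/cuda --with-phigemm --without-magma
cd ..
make -f Makefile.gpu pw-gpu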

HTH
F

On Jun 25, 2015, at 5:17 PM, mohammed shambakey <shambak...@gmail.com> wrote:
> Hi
> 
> I'm trying to compile the gpu version. I'm using v5.1.2 for quantum_espresso 
> and gpu version 14.10.0.
> 
> As there are no patches for version 5.1.2, I didn't apply any patches.
> 
> For configuration, I use:
> ./configure --enable-parallel --enable-openmp --with-scalapack --enable-cuda 
> --with-gpu-arch=all --with-cuda-dir=$CUDA_PATH --without-magma --with-phigemm
> 
> Configuration succeeded, but with warnings. Configuration output is attached 
> as "compilation_err.log".
> 
> For compilation, I use:
> make -f Makefile.gpu all-gpu
> 
> but compilation fails with error:
> make[2]: *** No rule to make target `cublas'.  Stop.
> make[2]: Leaving directory `/home/q_esp/espresso-5.1.2/BLAS'
> make[1]: *** [libblas_internal] Error 2
> make[1]: Leaving directory `/home/q_esp/espresso-5.1.2/install'
> make: *** [libblas] Error 2
> 
> Compilation output is attached as "compilation_err.log".
> 
> Please help
> 
> Regards
> 
> 
> 
> -- 
> Mohammed

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [*] Re: [qe-gpu]

2015-06-26 Thread Filippo Spiga
On Jun 22, 2015, at 1:01 PM, Anubhav Kumar <kanub...@iitk.ac.in> wrote:
> Dear Filippo
> 
> Thank you for your reply.
>> Compile the code in serial, make sure you export CUDA_VISIBLE_DEVICES
>> accordingly to target the GPU you want.
> 
> Can't we run multiple MPI processes on a single GPU?

Yes, you can, but no more than two. QE tries to maximize memory usage on the GPU, 
so if you put more than 2 MPI processes on a single GPU it becomes slow. In your 
case you do not want to use more than 4 MPI processes in total. 1:1 is still the 
best scenario.

To better share a GPU in this way you need to use NVIDIA MPS (the Multi-Process 
Service). Setting up NVIDIA MPS is trivial, but you again need to do the binding 
manually. This is not something that a non-expert user can do without support. I 
suggest looking for someone in your institution who is a GPU expert, or an expert 
sysadmin, who can help you with this scenario. By email... it is kind of 
difficult (and it is _not_ the scope of this mailing list).
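
A minimal sketch of sharing one GPU between two MPI ranks through MPS 
(nvidia-cuda-mps-control is the standard CUDA control tool; the input file name 
is illustrative):

export CUDA_VISIBLE_DEVICES=0
nvidia-cuda-mps-control -d                # start the MPS control daemon
mpirun -np 2 ./pw-gpu.x -inp input.in     # both ranks now share GPU 0
echo quit | nvidia-cuda-mps-control       # shut the daemon down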

HTH

Cheers,
F

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert



[Pw_forum] Version 5.2.0 of Quantum ESPRESSO is available for download

2015-06-22 Thread Filippo Spiga
Dear everybody,

I am pleased to announce that Version 5.2.0 of Quantum ESPRESSO (SVN 
revision 11602) is now available for download.


You can find all related packages published on the QE-FORGE website at this 
link: 
http://qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView&release_id=195

Or download directly espresso-5.2.0.tar.gz here: 
http://qe-forge.org/gf/download/frsrelease/195/806/espresso-5.2.0.tar.gz


Please refer to the file "Doc/release-notes" for additional details about the 
release (new features, fixes, incompatibilities, known bugs). For any new bug, 
suspicious misbehavior or any proven reproducible wrong result please get in 
contact with the developers by writing directly to q-e-develop...@qe-forge.org .

Happy Computing!

Best Regards,
Filippo

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [qe-gpu]

2015-06-18 Thread Filippo Spiga
On Jun 17, 2015, at 1:18 PM, nihal...@iitk.ac.in wrote:
> I am trying to run QE-GPU with CUDA 7.0.

QE-GPU is not tested for CUDA 7.0. It will probably compile but I haven't done 
any validation using the latest driver and the latest SDK.


> I have 3 GPUs on my system, but I want to run it on a single GPU.

Compile the code in serial, make sure you export CUDA_VISIBLE_DEVICES 
accordingly to target the GPU you want.
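
For example, to expose only the second of the three GPUs to a serial run (device 
indices as reported by nvidia-smi; a sketch):

export CUDA_VISIBLE_DEVICES=1
./pw-gpu.x -inp input.in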

Regards,
Filippo

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [qe-gpu]

2015-06-13 Thread Filippo Spiga
Dear Anubhav,

run in parallel with 2 MPI processes and make sure CUDA_VISIBLE_DEVICES is set such that

MPI rank 0 -> GPU id 1 (K20)
MPI rank 1 -> GPU id 2 (K20)
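
One way to obtain this mapping (a sketch, assuming Open MPI, which exports 
OMPI_COMM_WORLD_LOCAL_RANK) is to launch pw-gpu.x through a small wrapper script:

#!/bin/bash
# wrapper.sh: local rank 0 -> GPU id 1, local rank 1 -> GPU id 2 (skip the C2050 at id 0)
export CUDA_VISIBLE_DEVICES=$(( OMPI_COMM_WORLD_LOCAL_RANK + 1 ))
exec ./pw-gpu.x "$@"

and then run: mpirun -np 2 ./wrapper.sh -inp input.in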

Those K20 GPUs are actively cooled cards; how many sockets does this server (or 
workstation?) have?

F
 
> On Jun 13, 2015, at 11:08 AM, Anubhav Kumar <kanub...@iitk.ac.in> wrote:
> 
> Dear QE users
> 
> I have configured qe-gpu 14.10.0 with espresso-5.1.2. Parallel compilation
> was successful, but when I run ./pw-gpu.x it gives the following output:
> 
> ***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)
> 
> ***
> 
>   GPU-accelerated Quantum ESPRESSO (svn rev. unknown)
>   (parallel: Y , MAGMA : N )
> 
> ***
> 
> 
> Program PWSCF v.5.1.2 starts on 13Jun2015 at 15:23:59
> 
> This program is part of the open-source Quantum ESPRESSO suite
> for quantum simulation of materials; please cite
> "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
>  URL http://www.quantum-espresso.org",
> in publications or presentations arising from this work. More details at
> http://www.quantum-espresso.org/quote
> 
> Parallel version (MPI & OpenMP), running on  24 processor cores
> Number of MPI processes: 1
> Threads/MPI process:24
> Waiting for input...
> 
> 
> However, when I run the same command again, it gives
> 
> ***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)
> 
> Program received signal SIGSEGV: Segmentation fault - invalid memory
> reference.
> 
> Backtrace for this error:
> #0  0x7FB5001B57D7
> #1  0x7FB5001B5DDE
> #2  0x7FB4FF4C4D3F
> #3  0x7FB4F3391D40
> #4  0x7FB4F33666C3
> #5  0x7FB4F3364C80
> #6  0x7FB4F33759EF
> #7  0x7FB4F345CA1F
> #8  0x7FB4F345CD2F
> #9  0x7FB500B7DBCC
> #10  0x7FB500B7094F
> #11  0x7FB500B7CC56
> #12  0x7FB500B81410
> #13  0x7FB500B7507B
> #14  0x7FB500B6179D
> #15  0x7FB500B940A0
> #16  0x7FB5009BA047
> #17  0x8A4EA3 in phiGemmInit
> #18  0x76F55E in initcudaenv_
> #19  0x66AE90 in __mp_MOD_mp_start at mp.f90:184
> #20  0x66E192 in __mp_world_MOD_mp_world_start at mp_world.f90:58
> #21  0x66DCC0 in __mp_global_MOD_mp_startup at mp_global.f90:65
> #22  0x4082A0 in pwscf at pwscf.f90:23
> #23  0x7FB4FF4AFEC4
> Segmentation fault
> 
> Kindly help me out in solving the problem. My GPU details are
> 
> +-----------------------------------------------------------------------------+
> | NVIDIA-SMI 346.46                             Driver Version: 346.46        |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  Tesla C2050         Off  | 0000:02:00.0      On |                    0 |
> | 30%   62C   P12    N/A /  N/A |     87MiB /  2687MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  Tesla K20c          Off  | 0000:83:00.0     Off |                    0 |
> | 42%   55C    P0    46W / 225W |   4578MiB /  4799MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  Tesla K20c          Off  | 0000:84:00.0     Off |                    0 |
> | 34%   46C    P8    17W / 225W |     14MiB /  4799MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> 
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    1     27680     C   ./pw-gpu.x                                   4563MiB |
> +-----------------------------------------------------------------------------+

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] How to disable parallel configuration

2015-06-08 Thread Filippo Spiga
Dear Gargee,

Are you running QE-GPU on your laptop? Most likely it is not going to work or, 
if it works, it is going to be slow anyway. GPUs embedded in laptops are not 
powerful enough; QE-GPU works better with NVIDIA TESLA GPUs (e.g. K20, K40 and 
K80). These cards are designed for compute workloads.

HTH

Regards,
Filippo

On Jun 8, 2015, at 8:51 AM, Gargee Bhattacharyya 
<bhattacharyya.gar...@gmail.com> wrote:
> 
> Sir ,
>I am having error of following type
> 
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with error 
> code 0. I am running my program from my laptop. I am trying to disable the 
> parallel configuration with the help of the following link:
> 
> https://github.com/fspiga/QE-GPU
> 
> I shall be highly obliged if you help me to disable the parallel 
> configuration on my laptop. 
> 
> 
> &CONTROL
>   calculation='scf'
>   restart_mode='from_scratch'
>   tstress=.true.
>   tprnfor=.true.
>   prefix='ZnO'
>   pseudo_dir='/home/iit/GARGEE/espresso-5.1.2/pseudo/',
>   out_dir='home/iit/GARGEE/ZnO/',
>   forc_conv_thr=1.D-4
> /
> &SYSTEM
>   ibrav=4
>   nat=4
>   ntyp=2
>   A=3.2495
>   B=3.2495
>   C=3.2495
>   cosAB=0
>   cosBC=0
>   cosAC=-0.577
>   ecutwfc=55
>   ecutrho=440
> /
> &ELECTRONS
>   conv_thr=1.0e-10
> /
> &IONS
> /
> &CELL
> /
> ATOMIC_SPECIES
>   Zn 65  Zn.pbe-van.UPF
>   O  16  O.pbe-van_ak.UPF
> /
> ATOMIC_POSITIONS (crystal)
>   Zn  0.0   0.0   0.0
>   Zn  0.333 0.666 0.5
>   O   0.0   0.0   0.345
>   O   0.333 0.666 0.845 
> K_POINTS automatic
>   4 4 4 0 0 0
> 
> 
> -- 
> Yours sincerely 
> 
> Gargee Bhattacharyya
> ​PhD Student
> Materials Sciences & Engineering
> Indian Institute ​​of Technology, Indore
> ​
> M.Tech (VLSI Design & Microelectronics Technology)
> Department of ETCE
> Jadavpur University 

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [*] Re: [qe-gpu]

2015-05-30 Thread Filippo Spiga
FUNCTION statement at (1)
> Fatal Error: Error count reached limit of 25.
> make[1]: *** [rdiaghg.o] Error 1
> make[1]: Leaving directory `/home/anubhav/Downloads/espresso-5.1.2/PW/src'
> make: *** [pw-lib] Error 2
> 
> Can you please help me out?
> 
> 
> 
> 
> 
> 
>> Dear Anubhav,
>> 
>> make sure you clean properly before re-run configure and make
>> 
>> make -f Makefile.gpu distclean
>> ./configure ...
>> make  -f Makefile.gpu pw-gpu
>> 
>> F
>> 
>> On May 27, 2015, at 8:54 AM, kanub...@iitk.ac.in wrote:
>>> Dear QE users
>>> 
>>> I was configuring  qe-gpu 14.10.0 with espresso-5.1.2 on ubuntu 14.04
>>> .Serial Configuration was successful, but when i run makefile, it gives
>>> me
>>> the following error
>>> 
>>> /usr/bin/ld:
>>> /home/anubhav/Downloads/espresso-5.1.2/GPU/..//qe-magma/lib/libmagma.a(ztrevc3_mt.o):
>>> undefined reference to symbol '__cxa_pure_virtual@@CXXABI_1.3'
>>> //usr/lib/x86_64-linux-gnu/libstdc++.so.6: error adding symbols: DSO
>>> missing from command line
>>> collect2: error: ld returned 1 exit status
>>> make[1]: *** [pw-gpu.x] Error 1
>>> make[1]: Leaving directory
>>> `/home/anubhav/Downloads/espresso-5.1.2/GPU/PW'
>>> make: *** [pw-gpu] Error 2
>>> 
>>> As suggested in one the mail, i configured qe-gpu without magma.Then
>>> after
>>> running make file, i am getting the following error
>>> 
>>> ../Modules/libqemodgpu.a(cuda_init.o): In function `initcudaenv_':
>>> tmpxft_384c_-3_cuda_init.cudafe1.cpp:(.text+0x8f0):
>>> undefined
>>> reference to `magma_init'
>>> ../Modules/libqemodgpu.a(cuda_init.o): In function `closecudaenv_':
>>> tmpxft_384c_-3_cuda_init.cudafe1.cpp:(.text+0xa5a):
>>> undefined
>>> reference to `magma_finalize'
>>> libpwgpu.a(cdiaghg_gpu.o): In function `cdiaghg_gpu_':
>>> /home/anubhav/Downloads/espresso-5.1.2/GPU/PW/cdiaghg_gpu.f90:146:
>>> undefined reference to `magmaf_zhegvx_'
>>> collect2: error: ld returned 1 exit status
>>> make[1]: *** [pw-gpu.x] Error 1
>>> make[1]: Leaving directory
>>> `/home/anubhav/Downloads/espresso-5.1.2/GPU/PW'
>>> make: *** [pw-gpu] Error 2
>>> Please someone help me out
>>> 
>>> Anubhav Kumar
>>> IITK
>>> 
>> 
>> --
>> Mr. Filippo SPIGA, M.Sc.
>> http://fspiga.github.io ~ skype: filippo.spiga
>> 
>> «Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
>> 

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert

*
Disclaimer: "Please note this message and any attachments are CONFIDENTIAL and 
may be privileged or otherwise protected from disclosure. The contents are not 
to be disclosed to anyone other than the addressee. Unauthorized recipients are 
requested to preserve this confidentiality and to advise the sender immediately 
of any error in transmission."



___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE-GPU

2015-05-30 Thread Filippo Spiga
Dear Trinh,

as reported in the README file, QE-GPU is a "plugin-like" type of package. A 
patch needs to be applied only to older QE versions, depending on the QE & 
QE-GPU combination. It sounds obvious to me (but I will make the statement 
clearer in the future release) that if you are using QE 5.1.2 and QE-GPU v14.10 
then you do not need a patch: just follow a few simple instructions. If you are 
interested in running QE 5.1.2 and playing with GPU support, everything you need 
is v14.10.0. It will work.

As proof, here are the steps I just did on a machine I have in Italy.

wget http://qe-forge.org/gf/download/frsrelease/185/753/espresso-5.1.2.tar.gz
tar zxvf espresso-5.1.2.tar.gz
wget https://github.com/fspiga/QE-GPU/archive/v14.10.0.tar.gz
tar zxvf v14.10.0.tar.gz
mv QE-GPU-14.10.0/GPU espresso-5.1.2/
cd espresso-5.1.2/
module load cuda/6.5
cd GPU/
./configure --enable-parallel --enable-openmp --enable-cuda 
--with-gpu-arch=sm_35 --with-cuda-dir=${CUDA_INSTALL_PATH} --without-magma 
--with-phigemm  --without-scalapack
cd ..
make -f Makefile.gpu -j4 pw-gpu


Compilation finished correctly, the executable pw-gpu.x is placed under bin/

$ ldd bin/pw-gpu.x
linux-vdso.so.1 =>  (0x7fff423f9000)
libmkl_gf_lp64.so => 
/home/fspiga/intel/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so 
(0x7fe53cea2000)
libmkl_gnu_thread.so => 
/home/fspiga/intel/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gnu_thread.so
 (0x7fe53c345000)
libmkl_core.so => 
/home/fspiga/intel/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so 
(0x7fe53ae17000)
libcublas.so.6.5 => /opt/cuda/6.5/lib64/libcublas.so.6.5 
(0x7fe539375000)
libcufft.so.6.5 => /opt/cuda/6.5/lib64/libcufft.so.6.5 
(0x7fe53695)
libcudart.so.6.5 => /opt/cuda/6.5/lib64/libcudart.so.6.5 
(0x7fe53670)
libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 
(0x7fe5363d3000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fe5360cc000)
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 
(0x7fe535ebd000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7fe535ca7000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7fe535a88000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fe5356c3000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fe5354bf000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7fe5352b6000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7fe534fb2000)
libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7fe534d75000)
/lib64/ld-linux-x86-64.so.2 (0x7fe53d5c6000)

An important note: if you are working on a version of QE that has been heavily 
customized, I do not guarantee that QE-GPU will work. To discuss such a 
scenario, it is better if you contact me privately.

Regards,
Filippo


On May 28, 2015, at 7:28 PM, Vo, Trinh (398K) <trinh...@jpl.nasa.gov> wrote:
> Dear PWSCF users,
> 
> As far as we can see, the GPU version of espresso (QE-GPU) is a patch. Coming 
> with espresso-5.1.2, there is a version of the GPU patch named v14.06.0. 
> This v14.06 does not match espresso-5.1.2. Downloading the latest version of 
> QE-GPU (14.10.0), we see that this QE-GPU is for espresso-5.0.2. 
> 
> Which one is the correct one to use?
> 
> Thank you,
> 
> Trinh
> Data Science Modeling and Computing Group
> JPL/CalTech
> 

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [qe-gpu]

2015-05-27 Thread Filippo Spiga
Dear Anubhav,

make sure you clean properly before re-run configure and make

make -f Makefile.gpu distclean
./configure ...
make  -f Makefile.gpu pw-gpu

F

On May 27, 2015, at 8:54 AM, kanub...@iitk.ac.in wrote:
> Dear QE users
> 
> I was configuring qe-gpu 14.10.0 with espresso-5.1.2 on Ubuntu 14.04.
> Serial configuration was successful, but when I run make, it gives me
> the following error:
> 
> /usr/bin/ld:
> /home/anubhav/Downloads/espresso-5.1.2/GPU/..//qe-magma/lib/libmagma.a(ztrevc3_mt.o):
> undefined reference to symbol '__cxa_pure_virtual@@CXXABI_1.3'
> //usr/lib/x86_64-linux-gnu/libstdc++.so.6: error adding symbols: DSO
> missing from command line
> collect2: error: ld returned 1 exit status
> make[1]: *** [pw-gpu.x] Error 1
> make[1]: Leaving directory `/home/anubhav/Downloads/espresso-5.1.2/GPU/PW'
> make: *** [pw-gpu] Error 2
> 
> As suggested in one the mail, i configured qe-gpu without magma.Then after
> running make file, i am getting the following error
> 
> ../Modules/libqemodgpu.a(cuda_init.o): In function `initcudaenv_':
> tmpxft_384c_-3_cuda_init.cudafe1.cpp:(.text+0x8f0): undefined
> reference to `magma_init'
> ../Modules/libqemodgpu.a(cuda_init.o): In function `closecudaenv_':
> tmpxft_384c_-3_cuda_init.cudafe1.cpp:(.text+0xa5a): undefined
> reference to `magma_finalize'
> libpwgpu.a(cdiaghg_gpu.o): In function `cdiaghg_gpu_':
> /home/anubhav/Downloads/espresso-5.1.2/GPU/PW/cdiaghg_gpu.f90:146:
> undefined reference to `magmaf_zhegvx_'
> collect2: error: ld returned 1 exit status
> make[1]: *** [pw-gpu.x] Error 1
> make[1]: Leaving directory `/home/anubhav/Downloads/espresso-5.1.2/GPU/PW'
> make: *** [pw-gpu] Error 2
> Please someone help me out
> 
> Anubhav Kumar
> IITK
> 

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [qe-gpu]

2015-05-22 Thread Filippo Spiga
Dear Anubhav Kumar,

this is a problem with your system: please check that GCC/GFORTRAN are installed 
and that Ubuntu 14.04 does not use weird default names for the C compiler. 
Moreover, try to compile the CUDA SDK examples to see whether CUDA is installed 
properly on the system.

FYI, QE-GPU 14.xx is not tested (yet) with CUDA 7.0. I suggest rolling back to 
CUDA 6.5 (or keeping the CUDA 7.0 driver but using the SDK from 6.5).

Regards,
Filippo

On May 22, 2015, at 10:35 AM, kanub...@iitk.ac.in wrote:
> Dear Sir
> 
> I was configuring qe-gpu 14.10.0 with espresso-5.1.2 and CUDA 7.0 on
> Ubuntu 14.04. Serial configuration was successful, but on giving the
> command 'make -f Makefile.gpu pw-gpu' I am getting the following error:
> 
> 
> ccbin gcc -O3 --compiler-options '-c -fPIC -fopenmp'
> -D__PHIGEMM_WEAK_INTERFACES -D__PHIGEMM_ENABLE_SPECIALK 
> -I/home/anubhav/Downloads/espresso-5.1.2/GPU/..//phiGEMM/include
> -I/include -I../include/ -c phigemm_env.c -o phigemm_env.o
> make[3]: ccbin: Command not found
> make[3]: [phigemm_env.o] Error 127 (ignored)
> ccbin gcc -O3 --compiler-options '-c -fPIC -fopenmp'
> -D__PHIGEMM_WEAK_INTERFACES -D__PHIGEMM_ENABLE_SPECIALK 
> -I/home/anubhav/Downloads/espresso-5.1.2/GPU/..//phiGEMM/include
> -I/include -I../include/ -c phigemm_auxiliary.c -o phigemm_auxiliary.o
> make[3]: ccbin: Command not found
> make[3]: [phigemm_auxiliary.o] Error 127 (ignored)
> ccbin gcc -O3 --compiler-options '-c -fPIC -fopenmp'
> -D__PHIGEMM_WEAK_INTERFACES -D__PHIGEMM_ENABLE_SPECIALK 
> -I/home/anubhav/Downloads/espresso-5.1.2/GPU/..//phiGEMM/include
> -I/include -I../include/ -c phigemm_dgemm.c -o phigemm_dgemm.o
> make[3]: ccbin: Command not found
> make[3]: [phigemm_dgemm.o] Error 127 (ignored)
> ccbin gcc -O3 --compiler-options '-c -fPIC -fopenmp'
> -D__PHIGEMM_WEAK_INTERFACES -D__PHIGEMM_ENABLE_SPECIALK 
> -I/home/anubhav/Downloads/espresso-5.1.2/GPU/..//phiGEMM/include
> -I/include -I../include/ -c phigemm_dgemm_specialK.c -o
> phigemm_dgemm_specialK.o
> make[3]: ccbin: Command not found
> make[3]: [phigemm_dgemm_specialK.o] Error 127 (ignored)
> ccbin gcc -O3 --compiler-options '-c -fPIC -fopenmp'
> -D__PHIGEMM_WEAK_INTERFACES -D__PHIGEMM_ENABLE_SPECIALK 
> -I/home/anubhav/Downloads/espresso-5.1.2/GPU/..//phiGEMM/include
> -I/include -I../include/ -c phigemm_zgemm.c -o phigemm_zgemm.o
> make[3]: ccbin: Command not found
> make[3]: [phigemm_zgemm.o] Error 127 (ignored)
> ccbin gcc -O3 --compiler-options '-c -fPIC -fopenmp'
> -D__PHIGEMM_WEAK_INTERFACES -D__PHIGEMM_ENABLE_SPECIALK 
> -I/home/anubhav/Downloads/espresso-5.1.2/GPU/..//phiGEMM/include
> -I/include -I../include/ -c phigemm_zgemm_specialK.c -o
> phigemm_zgemm_specialK.o
> make[3]: ccbin: Command not found
> make[3]: [phigemm_zgemm_specialK.o] Error 127 (ignored)
> mkdir -p ../bin ../lib
> ar ruv libphigemm.a phigemm_auxiliary.o phigemm_env.o phigemm_dgemm.o
> phigemm_dgemm_specialK.o phigemm_zgemm.o phigemm_zgemm_specialK.o
> ar: creating libphigemm.a
> ar: phigemm_auxiliary.o: No such file or directory
> make[3]: *** [static] Error 1
> make[3]: Leaving directory
> `/home/anubhav/Downloads/espresso-5.1.2/phiGEMM/src'
> make[2]: *** [phigemm] Error 2
> make[2]: Leaving directory `/home/anubhav/Downloads/espresso-5.1.2/phiGEMM'
> make[1]: *** [libphiGEMM] Error 2
> make[1]: Leaving directory
> `/home/anubhav/Downloads/espresso-5.1.2/GPU/install'
> make: *** [libphiGEMM] Error 2
> 
> Please help me out.
> 
> Anubhav Kumar
> IITK

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [qe-gpu]

2015-05-13 Thread Filippo Spiga
Dear I-do-not-know-your-name,

the suggestion is to remove both packages, check the permissions of your 
directories, and re-download them. Based on the little information provided, it 
looks like a Unix/system problem and not a QE/QE-GPU problem.
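
A quick check worth trying before re-downloading (a sketch; that error usually 
means the tarball was extracted without execute permissions):

chmod +x install/configure
./configure ...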

HTH
F

On May 14, 2015, at 6:48 AM, kanub...@iitk.ac.in wrote:
> Dear Sir,
> 
> I was configuring qe-gpu 14.10.0 with espresso-5.1.2 on Ubuntu 14.04. I am
> getting the following error: "./configure: line 51: ./install/configure:
> Permission denied"
> Please suggest a way to configure it properly.
> 

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] QE-GPU

2015-04-16 Thread Filippo Spiga
Dear H.Benaissa,

with a manual hack the code will compile and run, but Maxwell cards have poor 
double-precision support. Assuming that you do not hit constraints due to the 
limited amount of memory on the card, I doubt you will see any acceleration. 

My suggestion is to run using a TESLA computing product (K20, K40 or K80).

F

On Apr 16, 2015, at 8:54 AM, H.Benaissa <ben_u...@yahoo.fr> wrote:
> Hi,
> can we use a graphics card of compute capability 5.2 to run QE-GPU calculations?
> 
> thank you in advance

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] [Q-e-developers] Problem with Bands.x calculation

2015-04-01 Thread Filippo Spiga
Dear  Sayan,

your request suits the PWSCF forum better than q-e-developers. bands.x is part of 
the PP package; make sure you do "make pp" or "make all", otherwise the 
executable does not appear. If, for example, pw.x compiles successfully but 
bands.x does not, then that is weird. Are you able to run the configure?
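
For reference, the PP executables are built from the top-level espresso 
directory (a sketch):

./configure
make pw    # builds bin/pw.x
make pp    # builds the postprocessing package, including bin/bands.x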

F

On Mar 30, 2015, at 11:52 AM, sayan chaudhuri <csayan...@gmail.com> wrote:
> 
> 
> Hi,
>    I have been trying to learn Quantum ESPRESSO for the last couple of weeks 
> to draw the band structures of my samples, but I couldn't find the bands.x 
> file in any of the directories. Can you tell me what the problem is? Is it 
> because of a problem during installation? How can I get the file, or is there 
> any other procedure to calculate the band structure without using it?
>  Thanking You,
>Sayan Chaudhuri
>   IIT Indore
>
> 

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] QE_Parallel

2015-03-31 Thread Filippo Spiga
Dear Gul,

it might help a lot to understand three things:
- where you are running your code (meaning, the hardware platform)
- how you compile the code (meaning how you run configure)
- what libraries and compilers are you using (meaning your environment)

With the little information you provided, there is not much we can do to help.

Cheers,
Filippo


On Mar 31, 2015, at 4:24 PM, Gul Rahman <gulrah...@qau.edu.pk> wrote:
> Hello,
> I just joined the PW forum. I am not a new user of DFT, but I am new to the QE code.
> I have successfully installed (parallel) QE, but I feel that my job (just 2 
> atoms) took a very long time compared with the serial calculation.
> Can someone guide me on how to improve the parallel QE calculations?
> Thanks,
> Gul

--
Mr. Filippo SPIGA, M.Sc.
Quantum ESPRESSO Foundation
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] Problems in gipaw EFG calculations

2015-03-13 Thread Filippo Spiga
On Mar 9, 2015, at 6:05 PM, Paolo Giannozzi <paolo.gianno...@uniud.it> wrote:
> Just in case: try to replace the automatic array
>  complex(dp) :: psic(dffts%nnr)
> with an allocatable array. Sometimes large automatic arrays fill the
> stack and cause strange crashes.

Or try:

ulimit -s unlimited

But making psic allocatable is indeed a more elegant solution.
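
A minimal sketch of the suggested change (Fortran, following the declaration 
quoted above; allocation-error handling omitted):

! automatic array: storage comes from the stack, which can overflow for large dffts%nnr
! complex(dp) :: psic(dffts%nnr)

! allocatable array: storage comes from the heap
complex(dp), allocatable :: psic(:)
allocate( psic(dffts%nnr) )
! ... use psic as before ...
deallocate( psic )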

F

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] Build QE for Intel Phi native run

2015-02-06 Thread Filippo Spiga
Dear Hideaki,

can you share information about compiler versions, flags, and changes in the 
make.sys, so we can add instructions on how to get QE up on Intel Phi to the 
regular distribution? We will acknowledge you for the help :-)

Thanks in advance.

Regards,
Filippo

On Feb 6, 2015, at 8:18 AM, Hideaki Kuraishi <hideaki.kurai...@uk.fujitsu.com> 
wrote:
> Hi Filippo,
> 
> Thanks to the comment below,
> I now able to compile and run the code.
> Thanks a lot for the advice!
> 
> Best regards
> Kai
> 
>> -Original Message-
>> From: pw_forum-boun...@pwscf.org [mailto:pw_forum-boun...@pwscf.org]
>> On Behalf Of Filippo Spiga
>> Sent: Friday, February 06, 2015 4:03 PM
>> To: PWSCF Forum
>> Cc: <q-e-develop...@qe-forge.org>
>> Subject: Re: [Pw_forum] Build QE for Intel Phi native run
>> 
>> Dear Hideaki,
>> 
>> please keep us posted if you succeed to run or not so we can internally
>> track the problem and notify future users if a bug exist.
>> 
>> Thanks,
>> Filippo
>> 
>> On Jan 26, 2015, at 8:16 AM, Hideaki Kuraishi
>> <hideaki.kurai...@uk.fujitsu.com> wrote:
>>> Dear Filippo, Fabio
>>> 
>>> Thank you for the comment.
>>> Yes, core file is generated and will analyze this.
>>> 
>>> So far I have tried your suggestions but situation didn’t change and
>>> the same error occurred. Even when I used older versions of Intel
>>> compiler, v13 and 14, the error was reproduced. I will try this on more
>> using another system this week.
>>> 
>>> Best regards
>>> Kai
>>> 
>>> 
>>> From: pw_forum-boun...@pwscf.org [mailto:pw_forum-boun...@pwscf.org]
>>> On Behalf Of Filippo Spiga
>>> Sent: Sunday, January 25, 2015 11:07 PM
>>> To: PWSCF Forum
>>> Cc: <q-e-develop...@qe-forge.org>
>>> Subject: Re: [Pw_forum] Build QE for Intel Phi native run
>>> 
>>> Dear Hideaki,
>>> Fabio Affinito can advise for the Intel Phi part however I do suggest
>> to run configure in this way:
>>> 
>>> ./configure --enable-openmp --enable-parallel --with-scalapack=intel
>>> 
>>> and only after edit the make.sys to eventually modify the libraries
>> patch and names. The configure should be able to pickup MKL and ScaLAPACK
>> if the environment is set correctly.
>>> 
>>> The problem you see is additional to the one I noticed while ago with
>> the latest 15.x Intel compiler. If you can systematically reproduce this
>> compiler seg fault then it is worth to file a bug report to Intel. It
>> mention a core file... do you have one in the PW/src directory?
>>> 
>>> F
>>> 
>>> On Jan 23, 2015, at 3:59 PM, Hideaki Kuraishi
>> <hideaki.kurai...@uk.fujitsu.com> wrote:
>>> 
>>> Hello,
>>> 
>>> I have been trying to build QE 5.0.2 with Intel Compiler 15.0.0 and
>> Intel MPI 5.0 to run in native mode. However, though configure is fine,
>> it fails at make phase with segmentation fault error as follows.
>>> 
>>> --
>>>   :
>>> mpiifort -openmp -o bands_FS.x bands_FS.o -L$MKLROOT/lib/mic
>>> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -liopm5
>>> $MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5
>>> $HOME/mkl/lib/libfftw3xf_intel.a -L$MKLROOT/lib/mic -lmkl_intel_lp64
>>> -liomp5
>>> 
>>> /opt/intel/impi/5.0.0.028/intel64/bin/mpiifort: line 729:  5630
>> Segmentation fault  (core dumped) $Show $FC $FCFLAGS
>> "${allargs[@]}" $FCMODDIRS $FCINCDIRS -L${libdir}${MPILIBDIR}
>> -L$libdir $rpath_opt $mpilibs $I_MPI_OTHERLIBS $LDFLAGS $MPI_OTHERLIBS
>> 
>>> 
>>> ( cd ../../bin ; ln -fs ../PW/tools/bands_FS.x . )
>>> 
>>> mpiifort -openmp -o kvecs_FS.x kvecs_FS.o -L$MKLROOT/lib/mic
>>> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -liopm5
>>> $MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5
>>> $HOME/mkl/lib/libfftw3xf_intel.a -L$MKLROOT/lib/mic -lmkl_intel_lp64
>>> -liomp5
>>> 
>>> /opt/intel/impi/5.0.0.028/intel64/bin/mpiifort: line 729:  5656
>> Segmentation fault  (core dumped) $Show $FC $FCFLAGS
>> "${allargs[@]}" $FCMODDIRS $FCINCDIRS -L${libdir}${MPILIBDIR}
>> -L$libdir $rpath_opt $mpilibs $I_MPI_OTHERLIBS $LDFLAGS $MPI_OTHERLIBS
>> 
>>> 
>>> ( cd ../../bin ; ln -fs ../PW/tools/kvecs_FS.x . )
>>>   :
>>> --
>>> 
>>> If

Re: [Pw_forum] installation.

2015-02-06 Thread Filippo Spiga
Dear Jiban,

this mailing-list is _NOT_ meant to provide user support to people who have 
problems with their HPC system or problems compiling MPI or math libraries. 
Here the problem is how you are using Quantum ESPRESSO, not Quantum ESPRESSO 
itself.

Please address this type of problem to the people working at the HPC centre 
where you intend to run; those people are paid to provide such a service.

Cheers,
Filippo

On Feb 5, 2015, at 5:26 AM, Jiban Kangsabanik <jiban2...@gmail.com> wrote:
> Hi, 
>  After installing quantum espresso in a cluster by using commands 
> './configure' and 'make all' when I was running a calculation it was giving 
> the message like below- 
>  "WARNING: No preset parameters were found for the device that Open MPI
> detected:
> 
>   Local host:didymium.cmpgroup.ameslab.gov
>   Device name:   mlx4_0
>   Device vendor ID:  0x02c9
>   Device vendor part ID: 4099
> 
> Default device parameters will be used, which may result in lower
> performance.  You can edit any of the files specified by the
> btl_openib_device_param_files MCA parameter to set values for your
> device.
> 
> NOTE: You can turn off this warning by setting the MCA parameter
>   btl_openib_warn_no_device_params_found to 0."
> It was giving output but was taking approximately the same time as on my PC. 
> Please help resolve the issue. Thank you.
>   
>  sincerely,
>
> Jiban Kangsabanik
> Ph.D 
> student, Physics Dept.
>Indian 
> Institute of Technology Bombay

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


Re: [Pw_forum] Build QE for Intel Phi native run

2015-02-06 Thread Filippo Spiga
Dear Hideaki,

please keep us posted on whether you succeed in running or not, so we can 
internally track the problem and notify future users if a bug exists.

Thanks,
Filippo

On Jan 26, 2015, at 8:16 AM, Hideaki Kuraishi <hideaki.kurai...@uk.fujitsu.com> 
wrote:
> Dear Filippo, Fabio
>  
> Thank you for the comment. 
> Yes, core file is generated and will analyze this.
>  
> So far I have tried your suggestions but situation didn’t change and the same 
> error
> occurred. Even when I used older versions of Intel compiler, v13 and 14,
> the error was reproduced. I will try this on more using another system this 
> week.
>  
> Best regards
> Kai
>  
>  
> From: pw_forum-boun...@pwscf.org [mailto:pw_forum-boun...@pwscf.org] On 
> Behalf Of Filippo Spiga
> Sent: Sunday, January 25, 2015 11:07 PM
> To: PWSCF Forum
> Cc: <q-e-develop...@qe-forge.org>
> Subject: Re: [Pw_forum] Build QE for Intel Phi native run
>  
> Dear Hideaki,
> Fabio Affinito can advise for the Intel Phi part however I do suggest to run 
> configure in this way:
>  
> ./configure --enable-openmp --enable-parallel --with-scalapack=intel
>  
> and only after edit the make.sys to eventually modify the libraries patch and 
> names. The configure should be able to pickup MKL and ScaLAPACK if the 
> environment is set correctly.
>  
> The problem you see is additional to the one I noticed while ago with the 
> latest 15.x Intel compiler. If you can systematically reproduce this compiler 
> seg fault then it is worth to file a bug report to Intel. It mention a core 
> file... do you have one in the PW/src directory?
>  
> F
>  
> On Jan 23, 2015, at 3:59 PM, Hideaki Kuraishi 
> <hideaki.kurai...@uk.fujitsu.com> wrote:
> 
> Hello,
> 
> I have been trying to build QE 5.0.2 with Intel Compiler 15.0.0 and Intel MPI 
> 5.0 to run in native mode. However, though configure is fine, it fails at 
> make phase with segmentation fault error as follows.
> 
> --
>:
> mpiifort -openmp -o bands_FS.x bands_FS.o -L$MKLROOT/lib/mic 
> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -liopm5 
> $MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5 
> $HOME/mkl/lib/libfftw3xf_intel.a -L$MKLROOT/lib/mic -lmkl_intel_lp64 -liomp5
> 
> /opt/intel/impi/5.0.0.028/intel64/bin/mpiifort: line 729:  5630 Segmentation 
> fault  (core dumped) $Show $FC $FCFLAGS "${allargs[@]}" $FCMODDIRS 
> $FCINCDIRS -L${libdir}${MPILIBDIR} -L$libdir $rpath_opt $mpilibs 
> $I_MPI_OTHERLIBS $LDFLAGS $MPI_OTHERLIBS
> 
> ( cd ../../bin ; ln -fs ../PW/tools/bands_FS.x . )
> 
> mpiifort -openmp -o kvecs_FS.x kvecs_FS.o -L$MKLROOT/lib/mic 
> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -liopm5 
> $MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5 
> $HOME/mkl/lib/libfftw3xf_intel.a -L$MKLROOT/lib/mic -lmkl_intel_lp64 -liomp5
> 
> /opt/intel/impi/5.0.0.028/intel64/bin/mpiifort: line 729:  5656 Segmentation 
> fault  (core dumped) $Show $FC $FCFLAGS "${allargs[@]}" $FCMODDIRS 
> $FCINCDIRS -L${libdir}${MPILIBDIR} -L$libdir $rpath_opt $mpilibs 
> $I_MPI_OTHERLIBS $LDFLAGS $MPI_OTHERLIBS
> 
> ( cd ../../bin ; ln -fs ../PW/tools/kvecs_FS.x . )
>: 
> --
> 
> If someone has experienced the same error, could you please share the 
> information on how to solve this?
> 
> My procedure is...
> 
> (1) Configure
> export CC=mpicc
> export FC=mpiifort
> export F90=$FC
> export MPIF90=$FC
> export AR="xiar"
> export CFLAGS="-mmic -openmp"
> export FFLAGS=$CFLAGS
> export FCFLAGS=$CFLAGS
> export BLAS_LIBS="-L$MKLROOT/lib/mic -lmkl_intel_lp64 -liomp5" 
> export LAPACK_LIBS="$MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5" 
> export SCALAPACK_LIBS="-L$MKLROOT/lib/mic -lmkl_scalapack_lp64 
> -lmkl_blacs_intelmpi_lp64 -liopm5"
> 
> $ ./configure --enable-openmp --enable-parallel
> 
> (2) Build
> $ make pw 2>&1 | tee LOG.make
> 
> Best regards
> Kai
> 

Re: [Pw_forum] install PW

2015-01-27 Thread Filippo Spiga
On Jan 25, 2015, at 8:16 PM, mars...@aut.ac.ir wrote:
> "./configure  --enable-parallel --with-internal-blas --with-internal-lapack 
> --with-scalapack

If you are looking for the ScaLAPACK library, it is reasonable to assume you
have a BLAS/LAPACK library installed on your system that is better than the one
included in QE. So try one of the following:

(1) ./configure  --enable-parallel --with-internal-blas --with-internal-lapack 
--without-scalapack
(2) ./configure  --enable-parallel --with-scalapack

If you are using Intel MPI instead of Open MPI, please use
--with-scalapack=intel


My personal feeling is that you can avoid a lot of trouble by skipping
ScaLAPACK altogether if (2) fails to detect a good BLAS/LAPACK installation on
your system.
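
For example (a minimal check, assuming QE 5.x, where configure writes its
results into make.sys; adjust the file name if your version differs):

  $ ./configure --enable-parallel --with-scalapack
  $ grep -E '^(BLAS|LAPACK|SCALAPACK)_LIBS' make.sys

If BLAS_LIBS and LAPACK_LIBS point at the internal copies, option (1) without
ScaLAPACK is the safer route.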

HTH
F

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert





Re: [Pw_forum] Build QE for Intel Phi native run

2015-01-25 Thread Filippo Spiga
Dear Hideaki,
Fabio Affinito can advise on the Intel Phi part; however, I do suggest running
configure in this way:

./configure --enable-openmp --enable-parallel --with-scalapack=intel

and only afterwards edit make.sys to modify the library paths and names if
needed. The configure script should be able to pick up MKL and ScaLAPACK if the
environment is set correctly.
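
As a sketch of that manual step (library paths and names are illustrative and
depend on your MKL version; check Intel's link line advisor for the exact set),
the relevant make.sys entries for a MIC build would look something like:

  BLAS_LIBS      = -L$(MKLROOT)/lib/mic -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
  SCALAPACK_LIBS = -L$(MKLROOT)/lib/mic -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64

Note the OpenMP runtime is spelled -liomp5; the quoted log below links
-liopm5, which looks like a typo.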

The problem you see is in addition to the one I noticed a while ago with the
latest 15.x Intel compiler. If you can systematically reproduce this compiler
seg fault then it is worth filing a bug report with Intel. The log mentions a
core file... do you have one in the PW/src directory?
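
If you do have one, a quick backtrace usually shows whether the wrapper or the
Fortran backend crashed (a sketch; the core file name and the crashing binary
depend on your system settings, so match them against the log):

  $ gdb $(which ifort) core.5630
  (gdb) bt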

F

On Jan 23, 2015, at 3:59 PM, Hideaki Kuraishi <hideaki.kurai...@uk.fujitsu.com> 
wrote:
> Hello,
> 
> I have been trying to build QE 5.0.2 with Intel Compiler 15.0.0 and Intel MPI
> 5.0 to run in native mode. However, though configure is fine, the build fails
> at the make phase with a segmentation fault error as follows.
> 
> --
>:
> mpiifort -openmp -o bands_FS.x bands_FS.o -L$MKLROOT/lib/mic 
> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -liopm5 
> $MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5 
> $HOME/mkl/lib/libfftw3xf_intel.a -L$MKLROOT/lib/mic -lmkl_intel_lp64 -liomp5
> 
> /opt/intel/impi/5.0.0.028/intel64/bin/mpiifort: line 729:  5630 Segmentation 
> fault  (core dumped) $Show $FC $FCFLAGS "${allargs[@]}" $FCMODDIRS 
> $FCINCDIRS -L${libdir}${MPILIBDIR} -L$libdir $rpath_opt $mpilibs 
> $I_MPI_OTHERLIBS $LDFLAGS $MPI_OTHERLIBS
> 
> ( cd ../../bin ; ln -fs ../PW/tools/bands_FS.x . )
> 
> mpiifort -openmp -o kvecs_FS.x kvecs_FS.o -L$MKLROOT/lib/mic 
> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -liopm5 
> $MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5 
> $HOME/mkl/lib/libfftw3xf_intel.a -L$MKLROOT/lib/mic -lmkl_intel_lp64 -liomp5
> 
> /opt/intel/impi/5.0.0.028/intel64/bin/mpiifort: line 729:  5656 Segmentation 
> fault  (core dumped) $Show $FC $FCFLAGS "${allargs[@]}" $FCMODDIRS 
> $FCINCDIRS -L${libdir}${MPILIBDIR} -L$libdir $rpath_opt $mpilibs 
> $I_MPI_OTHERLIBS $LDFLAGS $MPI_OTHERLIBS
> 
> ( cd ../../bin ; ln -fs ../PW/tools/kvecs_FS.x . )
>: 
> --
> 
> If someone has experienced the same error, could you please share the 
> information on how to solve this?
> 
> My procedure is...
> 
> (1) Configure
> export CC=mpicc
> export FC=mpiifort
> export F90=$FC
> export MPIF90=$FC
> export AR="xiar"
> export CFLAGS="-mmic -openmp"
> export FFLAGS=$CFLAGS
> export FCFLAGS=$CFLAGS
> export BLAS_LIBS="-L$MKLROOT/lib/mic -lmkl_intel_lp64 -liomp5" 
> export LAPACK_LIBS="$MKLROOT/lib/mic/libmkl_lapack95_lp64.a -liomp5" 
> export SCALAPACK_LIBS="-L$MKLROOT/lib/mic -lmkl_scalapack_lp64 
> -lmkl_blacs_intelmpi_lp64 -liopm5"
> 
> $ ./configure --enable-openmp --enable-parallel
> 
> (2) Build
> $ make pw 2>&1 | tee LOG.make
> 
> Best regards
> Kai
> 

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert




Re: [Pw_forum] QE-GPU

2014-12-31 Thread Filippo Spiga
Dear Yelena,

On Dec 30, 2014, at 9:12 PM, yelena <yel...@ipb.ac.rs> wrote:
> Just to be sure, QE-GPU supports only up to 3.5 architecture?

QE-GPU fully supports compute capability 2.x and 3.x. Compute capability 1.3
may still work, but there are not that many such cards around; consider it
deprecated. Compute capability 3.5 (K20, K40, K80) is the best.


> I've recently had the opportunity to test some new NVIDIA cards (5.0 and
> 5.2), but when I tried to compile QE I saw it is not supported.
> Is there any workaround? A patch maybe? I was so excited to test them
> with QE :)

The Maxwell architecture is not optimized for double precision (FP64). If you
really want to give it a try, drop me an email tomorrow or Friday and I will
let you know which files you have to change after running configure (I am
curious as well, and I will be surprised if Maxwell is better than Kepler!)
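
For reference, the change boils down to retargeting the GPU code-generation
flags that configure writes into make.sys (a sketch, untested on Maxwell;
sm_50/sm_52 are assumed to match the 5.0/5.2 cards mentioned above):

  NVCCFLAGS = -O3 -gencode arch=compute_50,code=sm_50 \
                  -gencode arch=compute_52,code=sm_52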

Have a nice New Year's eve and New Year's Day.

Cheers,
Filippo

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert





Re: [Pw_forum] QE-GPU compiling

2014-12-30 Thread Filippo Spiga
Mohammad,

is /usr/local/lib64 in LD_LIBRARY_PATH? Did you specify the option
"--with-cuda-dir=/usr/local" when you ran configure from the GPU folder?

This mailing list is for problems related to Quantum ESPRESSO, not your Linux
installation. Make sure your environment is set correctly before proceeding
further. If you are not sure, Google is your best way to figure it out.
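
For instance (a sketch; the exact directory depends on where the CUDA toolkit
actually put libcudart/libcublas on your machine):

  $ ls /usr/local/lib64/libcudart.so* /usr/local/lib64/libcublas.so*
  $ export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
  $ cd GPU && ./configure --with-cuda-dir=/usr/local ...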

F


> On Dec 30, 2014, at 6:07 AM, Mohamad Moadeli <mohammad.moadd...@gmail.com> 
> wrote:
> 
> Dear Filippo,
> 
> The graphics card driver (NVIDIA) had been installed. Then I followed these
> steps to install CUDA:
> 
> chmod +x ./NVIDIA_CUDA_Toolkit_2.0_rhel5.1_x86_64.run*
> su -c './NVIDIA_CUDA_Toolkit_2.0_rhel5.1_x86_64.run*'
> 
> Now, there is a CUDA folder under /usr/local, containing:
> bin  doc  include  lib  man  open64  src
> 
> I wonder what the problem is...
> 
> Thanks for your help,
> 
> regards,
> 
> Mohammad
> 
> On Mon, Dec 29, 2014 at 4:32 PM, Filippo Spiga <spiga.fili...@gmail.com> 
> wrote:
> Dear Mohamad,
> 
> did you have CUDA installed under /usr/local or /lib64? Anyway, the
> libcuda*.so libraries are not found in LD_LIBRARY_PATH; be sure to pass the
> right location where you installed the CUDA SDK.
> 
> HTH
> 
> Cheers,
> F
> 
> 
> > On Dec 29, 2014, at 11:18 AM, Mohamad Moadeli <mohammad.moadd...@gmail.com> 
> > wrote:
> >
> > Dear all,
> >
> > I am trying to compile QE-GPU (5.0.2). Here is the make.sys file:
> >
> > 
> > 
> > # make.sys.  Generated from make.sys.in by configure.
> >
> > # compilation rules
> >
> > .SUFFIXES :
> > .SUFFIXES : .o .c .f .f90 .cu
> >
> > # most fortran compilers can directly preprocess c-like directives: use
> > # $(MPIF90) $(F90FLAGS) -c $<
> > # if explicit preprocessing by the C preprocessor is needed, use:
> > # $(CPP) $(CPPFLAGS) $< -o $*.F90
> > #$(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
> > # remember the tabulator in the first column !!!
> >
> > .f90.o:
> > $(MPIF90) $(F90FLAGS) -c $<
> >
> > # .f.o and .c.o: do not modify
> >
> > .f.o:
> > $(F77) $(FFLAGS) -c $<
> >
> > .c.o:
> > $(CC) $(CFLAGS)  -c $<
> >
> > # CUDA files
> > .cu.o:
> > $(NVCC) $(NVCCFLAGS) -I../../include $(IFLAGS) $(DFLAGS)   -c $<
> >
> > # topdir for linking espresso libs with plugins
> > TOPDIR = /usr/local/codes/espresso/espresso-5.0.2/GPU/../
> >
> >
> > # DFLAGS  = precompilation options (possible arguments to -D and -U)
> > #   used by the C compiler and preprocessor
> > # FDFLAGS = as DFLAGS, for the f90 compiler
> > # See include/defs.h.README for a list of options and their meaning
> > # With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
> > # For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas
> >
> > # MANUAL_DFLAGS  = additional precompilation option(s), if desired
> > #  You may use this instead of tweaking DFLAGS and FDFLAGS
> > #  BEWARE: will not work for IBM xlf! Manually edit FDFLAGS
> > MANUAL_DFLAGS  =
> > DFLAGS =  -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK 
> > -D__CUDA -D__PHIGEMM $(MANUAL_DFLAGS)
> > FDFLAGS= $(DFLAGS)
> >
> > # IFLAGS = how to locate directories where files to be included are
> > # In most cases, IFLAGS = -I../include
> >
> > IFLAGS = -I../include  
> > -I/usr/local/codes/espresso/espresso-5.0.2/GPU/..//phiGEMM/include 
> > -I/include
> >
> > # MOD_FLAGS = flag used by f90 compiler to locate modules
> > # Each Makefile defines the list of needed modules in MODFLAGS
> >
> > MOD_FLAG  = -I
> >
> > # Compilers: fortran-90, fortran-77, C
> > # If a parallel compilation is desired, MPIF90 should be a fortran-90
> > # compiler that produces executables for parallel execution using MPI
> > # (such as for instance mpif90, mpf90, mpxlf90,...);
> > # otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
> > # If you have a parallel machine but no suitable candidate for MPIF90,
> > # try to specify the directory containing "mpif.h" in IFLAGS
> > # and to specify the location of MPI libraries in MPI_LIBS
> >
> > MPIF90 = mpif90
> > #F90   = ifort
> > CC = icc
> > F77= ifort
> >
> > # C preprocessor and preprocessing flags - for explicit pre

Re: [Pw_forum] QE-GPU compiling

2014-12-29 Thread Filippo Spiga
lp64  -lmkl_sequential -lmkl_core
> BLAS_LIBS_SWITCH = external
> 
> # If you have nothing better, use the local copy :
> # LAPACK_LIBS = /your/path/to/espresso/lapack-3.2/lapack.a
> # LAPACK_LIBS_SWITCH = internal
> # For IBM machines with essl (-D__ESSL): load essl BEFORE lapack !
> # remember that LAPACK_LIBS precedes BLAS_LIBS in loading order
> 
> # CBLAS is used in case the C interface for BLAS is missing (i.e. ACML)
> CBLAS_ENABLED = 0
> 
> LAPACK_LIBS=  
> LAPACK_LIBS_SWITCH = external
> 
> ELPA_LIBS_SWITCH = disabled
> SCALAPACK_LIBS = -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64
> 
> # nothing needed here if the internal copy of FFTW is compiled
> # (needs -D__FFTW in DFLAGS)
> 
> FFT_LIBS   =  -lfftw3 
> 
> # For parallel execution, the correct path to MPI libraries must
> # be specified in MPI_LIBS (except for IBM if you use mpxlf)
> 
> MPI_LIBS   = 
> 
> # IBM-specific: MASS libraries, if available and if -D__MASS is defined in 
> FDFLAGS
> 
> MASS_LIBS  = 
> 
> # ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv
> 
> AR = ar
> ARFLAGS= ruv
> 
> # ranlib command. If ranlib is not needed (it isn't in most cases) use
> # RANLIB = echo
> 
> RANLIB = ranlib
> 
> # all internal and external libraries - do not modify
> 
> FLIB_TARGETS   = all
> 
> # CUDA section
> NVCC = 
> NVCCFLAGS= -O3 -gencode arch=compute_35,code=sm_35 
> 
> PHIGEMM_INTERNAL = 1
> PHIGEMM_SYMBOLS  = 1
> MAGMA_INTERNAL   = 0
> 
> LIBOBJS= ../flib/ptools.a ../flib/flib.a ../clib/clib.a 
> ../iotk/src/libiotk.a 
> LIBS   = $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FFT_LIBS) $(BLAS_LIBS) 
> $(MPI_LIBS) $(MASS_LIBS) $(LD_LIBS)
> 
> # wget or curl - useful to download from network
> WGET = wget -O
> 
> =
> =
> 
> The following error occurs:
> 
> =
> make[3]: Entering directory 
> `/usr/local/codes/espresso/espresso-5.0.2/S3DE/iotk/src'
> make[3]: Nothing to be done for `loclib_only'.
> make[3]: Leaving directory 
> `/usr/local/codes/espresso/espresso-5.0.2/S3DE/iotk/src'
> mpif90 -static-intel  -o iotk_print_kinds.x iotk_print_kinds.o libiotk.a   
> -L/lib64 -lcublas  -lcufft -lcudart 
> ld: cannot find -lcublas
> ld: cannot find -lcufft
> ld: cannot find -lcudart
> make[2]: *** [iotk_print_kinds.x] Error 1
> make[2]: Leaving directory 
> `/usr/local/codes/espresso/espresso-5.0.2/S3DE/iotk/src'
> make[1]: *** [libiotk] Error 2
> make[1]: Leaving directory `/usr/local/codes/espresso/espresso-5.0.2/install'
> make: *** [libiotk] Error 2
> ==
> 
> Any suggestion will be highly appreciated. Thank you in advance.
> 
> Mohammad Moaddeli
> PhD student,
> Shahid Chamran University of Ahvaz

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert






Re: [Pw_forum] Help with QE compilation

2014-11-27 Thread Filippo Spiga
Elliot, try the following simple way to configure:

./configure --enable-parallel --disable-openmp --without-scalapack
make all

Every time you run with 4 MPI processes you should set "-ndiag 4" as a
parameter of pw.x (or cp.x or the other executables). "make install" &
"--prefix" do not work perfectly in QE-5.0.2; they have been fixed only
recently. You can skip the external FFTW3 library: QE will use its internal FFT
driver, which is more than enough for a quad-core system.
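
A typical run on your quad-core box would then look like this (executable path
and input file name are illustrative):

  $ mpirun -np 4 ./bin/pw.x -ndiag 4 -inp input.in > output.out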

HTH
F


> On Nov 25, 2014, at 12:13 PM, Elliot Menkah <elliotsmen...@yahoo.com> wrote:
> 
> 
> Hello Everyone,
> 
> 
> I've compiled a parallel version of QE-5.0.2 on a quad-core workstation
> along with other dependencies such as openmpi-1.8.1 and fftw-3.3.4, but
> when I run my calculations with it, I do not seem to get the computing
> power and efficiency I expect.
> 
> The timings for the same jobs are no different from when I
> run them with a serial version.
> 
> Could it be that I didn't compile it well or some dependencies are missing?
> 
> Can anyone please help me out.
> 
> Below is how I compiled the packages.
> -
> 
> #openmpi
> -
> ./configure --prefix=/usr
> make all
> make install
> -
> 
> 
> 
> #Configuring fftw
> -
> ./configure CC=mpicc FC=mpif90
> make
> make install
> -
> 
> 
> #Quantum espresso 5.0.2
> -
> ./configure --with-internal-blas --with-internal-lapack --enable-openmp
> --enable-parallel FFT_LIBS=/usr/lib/libfftw3.a --with-scalapack 2>&1 |
> tee make_out3
> 
> make all 2>&1 | tee make_out4
> 
> 
> Thank you,
> 
> Kind Regards
> Elliot
> 
> 
> --
> Elliot S. Menkah
> Research Student - Computational Chemistry/ Computational Material Science
> Theoretical and Computational Chemistry
> Dept. of Chemistry
> Kwame Nkrumah UNiversity of Sci. and Tech.
> Kumasi
> Ghana
> 
> Tel: +233 243-055-717
> 
> Alt Email: elliotsmen...@gmail.com
>elliotsmen...@hotmail.com
> 
> 

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert





Re: [Pw_forum] [QE-GPU] something

2014-11-12 Thread Filippo Spiga
Dear Wenzhong,

you are probably using an old Python; the builtin exit() only became a
callable object in Python 2.5, so try a newer Python (2.5.x or later) and it
should work.
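
If upgrading Python is not an option, a one-line workaround (a sketch; first
check that line 69 of the script really is a bare exit() call, as the traceback
below suggests) is to replace it with an equivalent raise, which works on any
interpreter version:

  $ sed -i 's/^\( *\)exit()$/\1raise SystemExit/' GPU/scripts/addPhigemmSymbs.py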

F

> On Nov 12, 2014, at 12:53 PM, wz30...@mail.ustc.edu.cn wrote:
> 
> 
> Hi all,
> > I tried to install QE-GPU with the guide file but error happened like
> > this:
> > Traceback (most recent call last):
> > 
> >   File "./GPU/scripts/addPhigemmSymbs.py", line 69, in ?
> > 
> > exit()
> > 
> > TypeError: 'str' object is not callable
> > 
> > So what should I do?
> Best,
> 
> Wenzhong
> 
> --
> *
> 王文忠
> Undergraduate Student
> School of Earth and Space Sciences
> University of Science and Technology of China
> Hefei, Anhui Province, PR China
> 
> Email:wangwenzhong30...@gmail.com  
>  wz30...@mail.ustc.edu.cn
> *
> 

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert





Re: [Pw_forum] Quantum Espresso Installation Error Message - Undefined reference.

2014-11-01 Thread Filippo Spiga
The correct wrapper in your case is probably mpiifort, not mpif90. 

Please do

./configure MPIF90=mpiifort F90=ifort --enable-parallel --disable-openmp
--with-scalapack=intel

Do not specify anything more when you run configure. If the Intel compilers are
installed correctly, configure will pick up the right components automatically.
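
To double-check which underlying compiler each wrapper drives before you
configure (the exact output format depends on your MPI distribution):

  $ mpiifort -show   # Intel MPI: should print an ifort command line
  $ mpif90 -show     # if this prints gfortran, it is the wrong wrapper here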

HTH
F



> On Oct 31, 2014, at 9:24 AM, HPC SUPPORT <hpcana...@gmail.com> wrote:
> 
> Hi Filippo,
> 
> Thanks for the update.
> 
> 1) Here I have specified the Intel libraries only.
> BLAS_LIBS="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core"   
> SCALAPACK_LIBS="-lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" 
> FFT_LIBS="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core"
> 
> 2) For MPI also, I am using Intel MPI, not Open MPI.
> ]# mpif90 --show
> /usr/lib/gcc/x86_64-redhat-linux/4.4.6/libgfortranbegin.a(fmain.o): In 
> function `main':
> (.text+0x26): undefined reference to `MAIN__'
> 
> But I am still facing the problem,
> and I don't know what I am missing.
> Could you help me to solve this problem?
> 
> Thanks & Regards
> Jaikumar S
> 
> 
> On Thu, Oct 30, 2014 at 9:52 PM, Filippo Spiga <spiga.fili...@gmail.com> 
> wrote:
> Hi JaiKumar,
> 
> as pointed out by other people, you need to check your compiler and MPI
> library. Something is messed up there!
> 
> Two suggestions:
> 1) if you are using Intel MPI then do "--with-scalapack=intel" instead of 
> using the default (the default enables ScaLAPACK and uses Open MPI).
> 2) if you are using Open MPI then please check it has been compiled against 
> Intel compilers and not GNU. You can check this by executing "mpicc -show" or 
> " mpif90 -show"
> 
> After you have assessed these two points, the way to solve your problem is
> simple: keep everything consistent.
> 
> HTH
> F
> 
> 
> > On Oct 28, 2014, at 2:02 PM, HPC SUPPORT <hpcana...@gmail.com> wrote:
> >
> > Dear All,
> >
> > While compiling Quantum ESPRESSO version 5.1, we were getting an undefined
> > reference problem with symbols like __intel_sse2_strcpy. This got resolved
> > by adding the following library flags: -L/app/l_ics_2012/lib/intel64/
> > -liompprof5 -L/app/l_ics_2012/lib/intel64/ -liomp5.
> > But after adding them we were still getting the same error message.
> >
> > So, could you help us to solve this problem?
> >
> > # ./configure --prefix=/app/espresso-501 CFLAGS=$FCFLAGS
> > BLAS_LIBS="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core"
> > SCALAPACK_LIBS="-lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64"
> > FFT_LIBS="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core"
> > LIBS="-L/app/l_ics_2012/lib/intel64/ -liompprof5 
> > -L/app/l_ics_2012/lib/intel64/
> > -liomp5"
> >
> > mpif90 -g -pthread -o pw.x \
> >pwscf.o  libpw.a ../../Modules/libqemod.a ../../flib/ptools.a
> > ../../flib/flib.a ../../clib/clib.a ../../iotk/src/libiotk.a
> > -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64
> > -lmkl_sequential -lmkl_core -lfftw3 -lmkl_intel_lp64 -lmkl_sequential
> > -lmkl_core -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
> > ../../clib/clib.a(eval_infix.o): In function `GetNextToken':
> > eval_infix.c:(.text+0x589): undefined reference to `__intel_sse2_strcpy'
> > eval_infix.c:(.text+0x6c5): undefined reference to `__intel_sse2_strcpy'
> > eval_infix.c:(.text+0x925): undefined reference to `__intel_sse2_strcpy'
> > ../../clib/clib.a(eval_infix.o): In function `eval_infix':
> > eval_infix.c:(.text+0xa25): undefined reference to `_intel_fast_memset'
> > eval_infix.c:(.text+0xa4f): undefined reference to `_intel_fast_memcpy'
> > eval_infix.c:(.text+0xa6e): undefined reference to `_intel_fast_memset'
> > ../../clib/clib.a(eval_infix.o): In function `EvalInfix':
> > eval_infix.c:(.text+0xece): undefined reference to `__intel_sse2_strcpy'
> > eval_infix.c:(.text+0x1006): undefined reference to `__intel_sse2_strcpy'
> > ../../clib/clib.a(md5_from_file.o): In function `readFile':
> > md5_from_file.c:(.text+0x56): undefined reference to `_intel_fast_memset'
> > ../../clib/clib.a(md5_from_file.o): In function `get_md5':
> > md5_from_file.c:(.text+0x125): undefined reference to `_intel_fast_memset'
> > md5_from_file.c:(.text+0x16c): undefined reference to `__intel_sse2_strlen'
> > ../../clib/clib.a(md5.o): In function `md5_append':
> > md5.c:(.text+0x80): undefined reference to `_intel_fast_memcpy'
> > md5.c:(.text+0xf2): undefined reference to `_intel_fast_memcpy'
> > ../../clib/clib.a(md5.o): In function `md5_finish':
> > md5.c:(.text+0xe73): undef
