Re: [Pw_forum] QE 6.0 slower than 5.4 ??

2016-11-23 Thread nicola varini
Hi Francesco, in order to get reproducible performance, be sure to:
-disable hyperthreading
-disable the node health check mechanism
In both cases I have seen slowdowns of up to a factor of 2.
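For reference, a quick way to check whether hyperthreading is active on a Linux node (a sketch only; usually hyperthreading itself is switched off in the BIOS or by your batch system):

# "Thread(s) per core: 2" means hyperthreading is enabled
lscpu | grep -i 'thread(s) per core'
# more than one CPU id listed here also indicates an active sibling hyperthread
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list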
You also mention that you see a slowdown when using threads.
The system you describe looks way too small to see any significant
benefit from MPI+OpenMP execution.

BR,


Nicola


On 11/23/2016 12:09 PM, Francesco Pelizza wrote:
> Hi Dear community,
>
>
> I have a question. Since the QE 6.0 release I started to use it, and I
> noticed a slowdown for systems of up to 48 atoms / 100 electrons running
> on a few cores, and a speedup when running on more cores.
>
> In other words, taking as an example an insulating polymer, set in its
> lattice with 96 electrons:
>
> using QE 5.4 on 8 threads takes 25-35% less time than QE 6.0
>
> that's generally true for scf, vc-relax, bands, phonon, and other
> calculations
>
> if I scale up to servers or HPC I do not see a slowdown, and QE
> 6.0 is perhaps on average 10-15% faster.
>
>
> Was this expected?
>
> Has something changed in the way the system is distributed across threads?
>
>
> BW
>
> Francesco Pelizza
>
> Strathclyde University
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum

-- 
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
ME B2 464 (Bâtiment ME)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] [Pw_Forum] Parallel execution and configuration issues

2016-09-20 Thread nicola varini
, but sadly I am the administrator!
>> then look for somebody with experience in linux clusters. you
>> definitely want somebody local to help you, that can look over your
>> shoulder while you are doing things and explain stuff. this is really
>> tricky to do over e-mail. these days it should not be difficult to
>> find somebody, as using linux in parallel is ubiquitous in most
>> computational science groups.
>>
>> there also is a *gazillion* of material available online including
>> talk slides from workshops and courses and online courses and
>> self-study material. HPC university has an aggregator for such
>> material at:
>> http://hpcuniversity.org/trainingMaterials/
>>
>> axel.
>>
>>> Could some one with a similar hardware configuration could comment here how
>>> to achieve properly working cluster?
>>>
>>> Sincerely yours
>>>
>>> Konrad Gruszka
>>>
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>

-- 
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Availability beta version of Quantum ESPRESSO v6.0

2016-08-30 Thread nicola varini

Hi Ye, according to this page:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53945
it seems that even 4.8.x is not happy with that kind of syntax.
According to the GNU developers, 4.9 should fix the issue.
I'll let you know how this progresses.
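As a quick check before building (nothing QE-specific here; just verify what the MPI wrapper really calls):

# the wrapper should report GNU Fortran >= 4.9 for this kind of ISO_C_BINDING usage
mpif90 --version
gfortran --version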


Nicola


On 29.08.2016 16:01, Ye Luo wrote:

Hi Filippo,

I just noticed that src/ELPA is added to my build command line even
though it is not included in make.inc and I'm not using ELPA. The
compiler complains about the non-existent path.


I have tried the experimental HDF5 support in QE 6.0. It is great to
have HDF5 support.
1. Some check could be added to configure, because of the incompatibility
of .mod files between compilers. I noticed that most machines have an
HDF5 library installed that was built with GNU; when I build QE with the
Intel compilers, the compiler complains.
2. I tried to build QE with GNU 4.8.4. I need to uncomment "USE,
intrinsic :: ISO_C_binding" first, but then the compiler still
complains about C_LOC. Unsolved.
3. I also tried it with GNU 5.4 and hdf5 1.8.16. The two MPI-IO routines
can't be found in the library at link time. Adding "USE h5fdmpio" in
hdf_qe.90 solves the issue. Maybe there is an interface change in more
recent hdf5 versions.

I finally got hdf5 working well in my test runs. Viva!
I have CC'd Nicola, who develops the hdf5 support; he can probably
investigate the issues and also answer the following question.
I read the content of the h5 files. The wave-function part
is well organized into k-points and bands; however, in the charge-density
h5 the G-vectors are still in a .dat file and the content of
the h5 does not seem human readable. Do you have plans to improve them?


Thanks to every one. It's great to have a major release of QE.

Best regards,
Ye

===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory

2016-08-28 16:51 GMT-05:00 Filippo SPIGA 
<filippo.sp...@quantum-espresso.org>:


Dear all,

the Quantum ESPRESSO Development Team is pleased to release a beta
version of Quantum ESPRESSO v6.0.

We decided to publish a beta release to collect as much feedback as
possible from our user community and to catch as many bugs as
possible in advance. We will do our best to fix all
issues in time for the production release. v6.0 is planned by the end
of September. The "6.beta" archive can be downloaded from QE-FORGE:


http://qeforge.qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView&release_id=219


An important note: this is *not* a production release, so there may
be issues, and *not* all third-party packages are supported and
available at this stage. After the beta period this archive will
be removed.

We appreciate and value your feedback; PLEASE download and try it.
We look forward to hearing from you.

Happy Computing

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~
http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum




--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Availability beta version of Quantum ESPRESSO v6.0

2016-08-29 Thread nicola varini

Dear Ye, thanks for contributing your feedback.
1. Yes, it could be done. Maybe Filippo has in mind a clean way to do this.
Locally we have started to use spack (https://github.com/LLNL/spack) to
install software on the cluster.
In theory, spack should be able to solve the problem of dependencies
quite seamlessly.
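For example, something along these lines (a sketch only; the exact spec syntax and compiler name depend on your spack version and site setup):

git clone https://github.com/LLNL/spack.git
. spack/share/spack/setup-env.sh
# ask spack to build HDF5 with MPI support using a given compiler,
# pulling in its dependencies automatically
spack install hdf5 +mpi %intel
spack find hdf5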

2. I developed mainly with Intel, but I'll try GCC as soon as I can.

On the AUSURF112 benchmark my output is:
[nvarini@deneb2 AUSURF112]$ ls -lrtah tempdir/gk.hdf5
-rw-r--r-- 1 nvarini scitas-ge 3.0M 29. Aug 16:19 tempdir/gk.hdf5
[nvarini@deneb2 AUSURF112]$ ls -lrtah tempdir/ausurf.save/
Au.pbe-nd-van.UPF    data-file.xml    K1/
charge-density.hdf5  gvectors.dat     K2/

Is the file gk.hdf5 what you are looking for? Is it created in your 
output directory?


Nicola



On 29.08.2016 16:01, Ye Luo wrote:

Hi Filippo,

I just noticed that src/ELPA is added to my build command line even
though it is not included in make.inc and I'm not using ELPA. The
compiler complains about the non-existent path.


I have tried the experimental HDF5 support in QE 6.0. It is great to
have HDF5 support.
1. Some check could be added to configure, because of the incompatibility
of .mod files between compilers. I noticed that most machines have an
HDF5 library installed that was built with GNU; when I build QE with the
Intel compilers, the compiler complains.
2. I tried to build QE with GNU 4.8.4. I need to uncomment "USE,
intrinsic :: ISO_C_binding" first, but then the compiler still
complains about C_LOC. Unsolved.
3. I also tried it with GNU 5.4 and hdf5 1.8.16. The two MPI-IO routines
can't be found in the library at link time. Adding "USE h5fdmpio" in
hdf_qe.90 solves the issue. Maybe there is an interface change in more
recent hdf5 versions.

I finally got hdf5 working well in my test runs. Viva!
I have CC'd Nicola, who develops the hdf5 support; he can probably
investigate the issues and also answer the following question.
I read the content of the h5 files. The wave-function part
is well organized into k-points and bands; however, in the charge-density
h5 the G-vectors are still in a .dat file and the content of
the h5 does not seem human readable. Do you have plans to improve them?


Thanks to every one. It's great to have a major release of QE.

Best regards,
Ye

===
Ye Luo, Ph.D.
Leadership Computing Facility
Argonne National Laboratory

2016-08-28 16:51 GMT-05:00 Filippo SPIGA 
<filippo.sp...@quantum-espresso.org>:


Dear all,

the Quantum ESPRESSO Development Team is pleased to release a beta
version of Quantum ESPRESSO v6.0.

We decided to publish a beta release to collect as much feedback as
possible from our user community and to catch as many bugs as
possible in advance. We will do our best to fix all
issues in time for the production release. v6.0 is planned by the end
of September. The "6.beta" archive can be downloaded from QE-FORGE:


http://qeforge.qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseView&release_id=219


An important note: this is *not* a production release, so there may
be issues, and *not* all third-party packages are supported and
available at this stage. After the beta period this archive will
be removed.

We appreciate and value your feedback; PLEASE download and try it.
We look forward to hearing from you.

Happy Computing

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~
http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum




--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Trouble linking fortran to c on Cray XC-40

2016-06-22 Thread nicola varini

Dear Andrew, this should work on an XC-40 with quantum-espresso-5.4.0.
Here I used the Intel environment (module swap PrgEnv-cray PrgEnv-intel) and
ScaLAPACK.

Generally, on Cray machines you need to use ftn for Fortran and cc for C.
Those are wrappers that call the compiler of the loaded programming
environment (PGI, Cray, Intel, or GNU) under the hood.
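A minimal sketch of what I mean (module and make-target names as on a typical XC-40; adapt them to your system):

module swap PrgEnv-cray PrgEnv-intel   # ftn/cc now drive the Intel compilers
cd espresso-5.4.0
# tell configure to use the Cray wrappers for both Fortran and C
./configure MPIF90=ftn F90=ftn F77=ftn CC=cc --enable-parallel
make pw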

Let me know if you experience any problems.

Nicola


On 06/21/2016 06:22 PM, Downs, Andrew S CTR USARMY ARL (US) wrote:

Hello All,

I'm building from scratch qe 5.4.0 for a user of mine, and I'm trying to get it 
to compile with the Cray compiler, not intel, gnu or pgi.

At the ./configure stage, the problem I run into is that some nested part of 
the configure script seems to think I'm using gfortran when I'm not.

I set:
FC=ftn
F77=ftn
F90=ftn
CC=cc
CXX=CC

Somehow it's still finding gfortran; it actually tells me that ftn has been
discarded in favor of gfortran.

Trying to dig around on my own, I looked into the configure script that gets
called by the main configure script and did a 'replace all' on 'gfortran' with
'ftn'.
Now it stops picking up gfortran, even though I see "checking whether we are using 
the GNU Fortran 77 compiler... yes" in the configure output, but I still run into 
the same linking error.

Any suggestions would be fantastic.  My apologies if this email is too detailed 
or not detailed enough.

-Andrew Downs, ARL HPC Specialist


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
ME B2 464 (Bâtiment ME)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

# make.sys.  Generated from make.sys.in by configure.

# compilation rules

.SUFFIXES :
.SUFFIXES : .o .c .f .f90

# most fortran compilers can directly preprocess c-like directives: use
#   $(MPIF90) $(F90FLAGS) -c $<
# if explicit preprocessing by the C preprocessor is needed, use:
#   $(CPP) $(CPPFLAGS) $< -o $*.F90
#   $(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!

.f90.o:
	$(MPIF90) $(F90FLAGS) -c $<

# .f.o and .c.o: do not modify

.f.o:
	$(F77) $(FFLAGS) -c $<

.c.o:
	$(CC) $(CFLAGS)  -c $<



# Top QE directory, not used in QE but useful for linking QE libs with plugins
# The following syntax should always point to TOPDIR:
#   $(dir $(abspath $(filter %make.sys,$(MAKEFILE_LIST))))

TOPDIR = /users/nvarini/espresso-5.4.0

# DFLAGS  = precompilation options (possible arguments to -D and -U)
#   used by the C compiler and preprocessor
# FDFLAGS = as DFLAGS, for the f90 compiler
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas

# MANUAL_DFLAGS  = additional precompilation option(s), if desired
#  BEWARE: it does not work for IBM xlf! Manually edit FDFLAGS
MANUAL_DFLAGS  =
DFLAGS =  -D__INTEL -D__DFTI -D__MPI -D__PARA -D__SCALAPACK
FDFLAGS= $(DFLAGS) $(MANUAL_DFLAGS)

# IFLAGS = how to locate directories with *.h or *.f90 file to be included
#  typically -I../include -I/some/other/directory/
#  the latter contains .e.g. files needed by FFT libraries

IFLAGS = -I../include -I${MKLROOT}/include

# MOD_FLAGS = flag used by f90 compiler to locate modules
# Each Makefile defines the list of needed modules in MODFLAGS

MOD_FLAG  = -I

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90 = ftn
#F90   = ftn
CC = icc
F77= ftn

# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS

CPP= cpp
CPPFLAGS   = -P -C -traditional $(DFLAGS) $(IFLAGS)

# compiler flags: C, F90, F77
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax

CFLAGS = -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS   = $(FFLAGS) -nomodule -fpp $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS = -O2 -assume byterecl -g -traceback

# compiler flags without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack

FFLAGS_NOOPT   = -O0 -assume byterecl -g

Re: [Pw_forum] Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

2016-02-01 Thread nicola varini

Dear Muhammad,
I compiled qe-5.3 with gfortran-4.8, mpich-3.04, gcc-4.8, and fftw-3.3.4

I was able to run your input without problems.
My suspicion is that you have some problem with the FFT interface.
Did you specify -D__FFTW3 in the DFLAGS?
How do you link the FFTW library?
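As a quick check, this is roughly what I would look for in make.sys (the library path below is just a placeholder):

grep -E '^(DFLAGS|FFT_LIBS)' make.sys
# for an external FFTW3 one typically expects something like
#   DFLAGS   = ... -D__FFTW3 ...
#   FFT_LIBS = -L/path/to/fftw-3.3.4/lib -lfftw3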

Nicola



On 01/31/2016 04:35 PM, Muhammad Zafar wrote:

Dear all,
I am facing a strange problem with all versions from 5.0 to 5.3.0
of Quantum ESPRESSO. I searched the archives of the pwscf forum but could not
solve this problem.
I am using Fedora GNU Fortran (GCC) 4.8.3 20140911 (Red Hat
4.8.3-7). This error appears for both parallel and serial executions.

The error is

iteration #  1 ecut=30.00 Ry beta=0.30
 Davidson diagonalization with overlap
 ethr =  1.00E-02,  avg # of iterations =  3.3

Program received signal SIGSEGV: Segmentation fault - invalid memory 
reference.


Backtrace for this error:
#0  0x411ECDD3
#1  0x411ED498
#2  0x43FF
#3  0x410C39A9
#4  0x4101A844
#5  0x4101A964
#6  0x41018163
#7  0x4101D7CA
#8  0x4101D283
#9  0x4101D7CA
#10  0x4101D283
#11  0x410BDCBB
#12  0x823ED22 in __fft_scalar_MOD_cfft3d at fft_scalar.f90:1280
#13  0x823CA91 in fwfft_x_ at fft_interfaces.f90:222
#14  0x81E5CE3 in interpolate_ at interpolate.f90:75
#15  0x815372B in sum_band_ at sum_band.f90:142
#16  0x8088D55 in electrons_scf_ at electrons.f90:486
#17  0x808E586 in electrons_ at electrons.f90:132
#18  0x804D804 in run_pwscf_ at run_pwscf.f90:90
#19  0x804D537 in pwscf at pwscf.f90:30
#20  0x48ED8962
I perform calculations for different lattice constants from 10.1170
to 11.7170 in steps of 0.1 (i.e. 10.2170, 10.3170, 10.4170,
10.5170, 10.6170, 10.7170, and so on). The above error appears for 10.5170,
10.6170, 10.8170, and up to 11.2170; the remaining values execute without
any error.

 The input file is

 &CONTROL
calculation='scf',
prefix ='VSe'
tstress=.t.,tprnfor=.t.,
outdir='/home/zafar/Pwscf/scratch',
pseudo_dir='/home/zafar/pslibrary.1.0.0/pbe/PSEUDOPOTENTIALS'
verbosity='high'
/
&SYSTEM
ibrav= 2  ,
celldm(1)= 10.5170,
nat= 2, ntyp= 2,
ecutwfc=30.0,
ecutrho = 200.0
/
&ELECTRONS
mixing_mode = 'plain'
 diagonalization='david'
mixing_beta = 0.3
conv_thr =  1.0d-8
/
ATOMIC_SPECIES
 Zn   65.409  Zn.pbe-nc.UPF
 Se   78.960  Se.pbe-n-nc.UPF
ATOMIC_POSITIONS crystal
 Zn  0.000  0.000  0.000
 Se  0.250  0.250  0.250
K_POINTS {automatic}
9 9 9 0 0 0

I hope that someone will help me.

Muhammad Zafar
PhD Scholar
Department of Physics
The Islamia University of Bahawalpur,Pakistan


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
ME B2 464 (Bâtiment ME)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Segmentation Fault (11)

2015-10-28 Thread nicola varini
Hi, from the traceback below I see that libmkl_avx is pulled in by your
linking.
If you use gfortran + MKL, can you check that you have the following in
your make.sys:


BLAS_LIBS = -Wl,--no-as-needed -L${MKLROOT}/lib/intel64 -lmkl_gf_lp64
-lmkl_core -lmkl_sequential -lpthread -lm


If this still doesn't work, you might try to use the internal BLAS and LAPACK
as Paolo suggested.
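That rebuild would look roughly like this (a sketch; the configure options are the ones named in the user-guide excerpt quoted below):

# start from a clean tree and use QE's bundled BLAS/LAPACK instead of MKL
make veryclean
./configure --enable-parallel --with-internal-blas --with-internal-lapack
make pw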


HTH.

Nicola


On 28/10/15 10:36, Paolo Giannozzi wrote:

Please see here:
http://www.quantum-espresso.org/faq/frequent-errors-during-execution/#5.2
Since zdotc is the function that segfaults, this (from the user guide) 
could be relevant:


2.7.6.3 Linux PCs with gfortran

"There is a known incompatibility problem between the calling 
convention for Fortran functions that return complex values: there is 
the convention used by g77/f2c, where in practice the compiler 
converts such functions to subroutines with a further parameter for 
the return value; gfortran instead produces a normal function 
returning a complex value. If your system libraries were compiled 
using g77 (which may happen for system-provided libraries in 
not-too-recent Linux distributions), and you instead use gfortran to 
compile QUANTUM ESPRESSO, your code may crash or produce random 
results. This typically happens during calls to zdotc, which is one 
the most commonly used complex-returning functions of BLAS+LAPACK.


For further details see for instance this link:
http://www.macresearch.org/lapackblas-fortran-106#comment-17071
or read the man page of gfortran under the flag -ff2c.

If your code crashes during a call to zdotc, try to recompile QUANTUM 
ESPRESSO using the internal BLAS and LAPACK routines (using the 
-with-internal-blas and -with-internal-lapack parameters of the 
configure script) to see if the problem disappears; or, add the -ff2c 
flag" (info by Giovanni Pizzi, Jan. 2013).


Note that a similar problem with complex functions exists with MKL 
libraries as well: if you compile with gfortran, link -lmkl_gf_lp64, 
not -lmkl_intel_lp64, and the like for other architectures. Since 
v.5.1, you may use the following workaround: add preprocessing option 
-Dzdotc=zdotc_wrapper to DFLAGS.




On Tue, Oct 27, 2015 at 10:27 PM, Pulkit Garg <mailto:pga...@asu.edu>> wrote:


[sparky-32:92490] *** Process received signal ***
[sparky-32:92490] Signal: Segmentation fault (11)
[sparky-32:92490] Signal code:  (128)
[sparky-32:92490] Failing at address: (nil)
[sparky-32:92490] [ 0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x2ac8b2aa7cb0]
[sparky-32:92490] [ 1]

/opt/intel/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_avx.so(mkl_blas_avx_zdotc+0xe0)
[0x2ac8c4de7da0]
[sparky-32:92490] [ 2]

/opt/intel/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_gf_lp64.so(zdotc_gf+0x2e)
[0x2ac8b007a56e]
[sparky-32:92490] [ 3]

/opt/intel/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_gf_lp64.so(zdotc+0x26)
[0x2ac8b007a87e]
[sparky-32:92490] *** End of error message ***
--
mpirun noticed that process rank 12 with PID 92490 on node
sparky-32 exited on signal 11 (Segmentation fault).

I am able to run QE for my structure with 4 atoms and also when my
structure has 50 atoms. But when I run QE for bigger structure
(108 atoms) I am getting the above error. People have posted
similar errors but I am not sure what to do to fix this.

Pulkit Garg

___
Pw_forum mailing list
Pw_forum@pwscf.org <mailto:Pw_forum@pwscf.org>
http://pwscf.org/mailman/listinfo/pw_forum




--
Paolo Giannozzi, Dept. Chemistry&Physics&Environment,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216 , fax +39-0432-558222 




___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] PHonon Raman Spectra error

2015-09-30 Thread nicola varini
OK, I see what you were missing: if you would like to restart with a different
number of processors, you must specify
wf_collect=.true., as the error message suggests:

  Error in routine phq_readin (1):
  pw.x run with a different number of processors. Use wf_collect=.true.
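In practice, something like this (a sketch based on the commands in your output; input/output file names are just placeholders):

# pw.x run: wf_collect=.true. must be set in the &CONTROL namelist of scf.in
mpirun -np 32 pw.x -nimage 4 -nk 4 -in scf.in > scf.out
# the subsequent ph.x run may then use a different number of processors
mpirun -np 16 ph.x -nk 4 -in ph.in > ph.out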


The best way to understand which options pw.x takes is to use
http://www.quantum-espresso.org/wp-content/uploads/Doc/INPUT_PW.html
as a reference.
Here,
http://www.quantum-espresso.org/wp-content/uploads/Doc/INPUT_PW.html#idp32096,
it explains what wf_collect does, while
http://www.quantum-espresso.org/wp-content/uploads/Doc/user_guide/node18.html
is a good description of the parallelization levels.
Now if I look at the parallelization info at the beginning of your output I see:
  Parallel version (MPI), running on 32 processors
  path-images division:  nimage =   4
  K-points division:     npool  =   4
  R & G space division:  proc/nbgrp/npool/nimage = 2




Nicola



On 29/09/15 17:38, Raymond Gasper wrote:
> Use wf_collect=.true.

-- 
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] PHonon Raman Spectra error

2015-09-29 Thread nicola varini

Dear Ray, a couple of questions:
-which mpi and compiler version are you using?
-which scheduler?
-how do you submit your calculation?
-Can you please send the output file that has been generated?

Regards,

Nicola


On 29/09/15 17:01, Raymond Gasper wrote:

Hi Pw_forum, I have a problem I haven't been able to solve:

I'm trying to get the PHonon package to work for calculating Raman 
spectra, and can't get the example to work. I've tried checking the 
archive and googling but cannot find a solution. I'm using QE version 
5.1.2, and consistently get this or a similar error on only example 5:


This job has allocated 32 nodes

/home/ray/espresso-5.1.2/PHonon/examples/example05 : starting

This example shows how to use pw.x and ph.x to calculate
the Raman tensor for AlAs.

  executables directory: /home/ray/espresso-5.1.2/bin
  pseudo directory:  /home/ray/espresso-5.1.2/pseudo
  temporary directory:   /home/ray/tmp
  checking that needed directories and files exist... done

  running pw.x as:  mpirun -v -np 32 /home/ray/espresso-5.1.2/bin/pw.x 
-nimage 4 -nk 4
  running ph.x as:  mpirun -v -np 32 /home/ray/espresso-5.1.2/bin/ph.x 
-nimage 4 -nk 4


  cleaning /home/ray/tmp... done
  running the scf calculation... done
  running the response calculation...Exit code -3 signaled from master
Killing remote processes...[17] [MPI Abort by user] Aborting Program!
[16] [MPI Abort by user] Aborting Program!
Abort signaled by rank 17: MPI_Abort() code: 1, rank 17, MPI Abort by 
user Aborting program !

[28] [MPI Abort by user] Aborting Program!
MPI process terminated unexpectedly
forrtl: error (69): process interrupted (SIGINT)
--

I've tried to tweak my environment variables, and have gotten slightly 
different errors, though all originate from mpirun. Using my current 
environment variables all pw.x examples run correctly.


This seems a very fundamental error, so I think I am missing something 
quite basic. Thanks for your time,


Ray Gasper
Computational Nanomaterials Laboratory
ELab 204
Chemical Engineering
University of Massachusetts Amherst
402-990-4900 


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Final iteration often crashes after structure optimization

2015-08-20 Thread nicola varini

Hi Ron,
that would be strange, although I would not exclude it. Can you send me
your make.sys?

In particular I did
./configure --enable-parallel --enable-openmp --with-scalapack=intel
DFLAGS =  -D__INTEL -D__DFTI -D__MPI -D__PARA -D__SCALAPACK 
-D__OPENMP $(MANUAL_DFLAGS)

IFLAGS = -I../include -I${MKLROOT}/include

MPIF90 = mpiifort
CC = icc
F77= ifort

Let me know if those settings help.

Nicola


On 20/08/2015 6:12 pm, Ronald Cohen wrote:
Thank you.  I will compare.  Do you think it is a compiler problem 
then? Thank you,


Ron

Sent from my iPad

On Aug 20, 2015, at 12:05 PM, nicola varini <mailto:nicola.var...@epfl.ch>> wrote:


Hi Ronald, I ran your test with espresso-5.1.2 and I was not able to
reproduce the bug.

I ran on 1 node, which has 16 cores and ~30 GB of RAM.
I tested with
srun -n 16 pw.x < xtal.in
srun -n 16 pw.x -npool 4 < xtal.in
They both work.

https://drive.google.com/file/d/0Bw84Psle5YfwNjFYNXR3azhVRVU/view?usp=sharing

I hope this helps.

Nicola
--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini



--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] Final iteration often crashes after structure optimization

2015-08-20 Thread nicola varini
Hi Ronald, I ran your test with espresso-5.1.2 and I was not able to
reproduce the bug.

I ran on 1 node, which has 16 cores and ~30 GB of RAM.
I tested with
srun -n 16 pw.x < xtal.in
srun -n 16 pw.x -npool 4 < xtal.in
They both work.

https://drive.google.com/file/d/0Bw84Psle5YfwNjFYNXR3azhVRVU/view?usp=sharing

I hope this helps.

Nicola

--
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch

Nicola Varini

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] QE trunk version has trouble compiling fftw3

2015-08-03 Thread nicola varini
Hi Samuel, I previously included the fftw3.f03 header, but the fftw3.f header
should work as well.
Can you try now?

Nicola


2015-08-03 15:28 GMT+02:00 Samuel Poncé :

> Dear Nicola,
>
> Thank you for your reply.
>
> The problem is that we do not have the *.f03 ffts on our cluster
> ls /share/apps/composer_xe_2013.1.117/mkl/include/fftw/
> fftw3.f  fftw3_mkl.f  fftw3_mkl.h  fftw3-mpi_mkl.h
> fftw.h   fftw_threads.h   rfftw_mpi.h
> fftw3.h  fftw3_mkl_f77.h  fftw3-mpi.h  fftw_f77.i
> fftw_mpi.h   rfftw.h  rfftw_threads.h
>
> Actually, by investigating the problematic revision I found that it was
> the revision 11615 (that you commited apparently) that was problematic.
> I have install QE r11614 and it work perfectly but version above that one
> crash with the fft problem.
>
> Best,
>
> Samuel
>
>
>
> On 3 August 2015 at 14:10, nicola varini  wrote:
>
>> Hi Samuel, we are working on fixing the configure in order to
>> automatically recognise where the
>> header should be.
>> In the meantime you can set IFLAGS = ... -I${FFTW_INC}
>> where FFTW_INC is the header directory.
>> It should look something like
>>
>> [nvarini@deneb1 BaTiO3]$ ls /ssoft/fftw/3.3.4/RH6/intel-15.0.2/include
>>
>> fftw3.f  fftw3.f03  fftw3.h  fftw3l.f03  fftw3l-mpi.f03  fftw3-mpi.f03
>> fftw3-mpi.h  fftw3q.f03
>> IHTH
>>
>> Nicola
>>
>>
>> 2015-08-03 15:01 GMT+02:00 Samuel Poncé :
>>
>>> Dear all,
>>>
>>> Today I updated my QE public trunk version (rev 11663).
>>>
>>> Since this update I have trouble compiling QE:
>>> mpif90 -O2 -assume byterecl -g -traceback -nomodule -fpp -D__INTEL
>>> -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include -I../iotk/src
>>> -I../ELPA/src -I. -c fft_scalar.f90
>>> fft_scalar.f90(66): #error: can't find include file: fftw3.f03
>>>
>>> Everything was working with the previous version (one month old).
>>>
>>> Do I need to specify something specific ?
>>>
>>> Best,
>>>
>>> Samuel Ponce
>>>
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>
>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] QE trunk version has trouble compiling fftw3

2015-08-03 Thread nicola varini
Hi Samuel, you only need the header. A simple way to get it is to download
the file
http://fftw.org/fftw-3.3.4.tar.gz
Then, simply by doing
./configure --prefix=$PWD; make; make install
you will get the file in the directory $PWD/include.
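Spelled out, the whole procedure is (a sketch; version 3.3.4 as in the link above):

wget http://fftw.org/fftw-3.3.4.tar.gz
tar xzf fftw-3.3.4.tar.gz
cd fftw-3.3.4
./configure --prefix=$PWD && make && make install
ls include/fftw3.f03   # the header fft_scalar.f90 is asking for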

Nicola

2015-08-03 15:28 GMT+02:00 Samuel Poncé :

> Dear Nicola,
>
> Thank you for your reply.
>
> The problem is that we do not have the *.f03 ffts on our cluster
> ls /share/apps/composer_xe_2013.1.117/mkl/include/fftw/
> fftw3.f  fftw3_mkl.f  fftw3_mkl.h  fftw3-mpi_mkl.h
> fftw.h   fftw_threads.h   rfftw_mpi.h
> fftw3.h  fftw3_mkl_f77.h  fftw3-mpi.h  fftw_f77.i
> fftw_mpi.h   rfftw.h  rfftw_threads.h
>
> Actually, by investigating the problematic revision I found that it was
> the revision 11615 (that you commited apparently) that was problematic.
> I have install QE r11614 and it work perfectly but version above that one
> crash with the fft problem.
>
> Best,
>
> Samuel
>
>
>
> On 3 August 2015 at 14:10, nicola varini  wrote:
>
>> Hi Samuel, we are working on fixing the configure in order to
>> automatically recognise where the
>> header should be.
>> In the meantime you can set IFLAGS = ... -I${FFTW_INC}
>> where FFTW_INC is the header directory.
>> It should look something like
>>
>> [nvarini@deneb1 BaTiO3]$ ls /ssoft/fftw/3.3.4/RH6/intel-15.0.2/include
>>
>> fftw3.f  fftw3.f03  fftw3.h  fftw3l.f03  fftw3l-mpi.f03  fftw3-mpi.f03
>> fftw3-mpi.h  fftw3q.f03
>> IHTH
>>
>> Nicola
>>
>>
>> 2015-08-03 15:01 GMT+02:00 Samuel Poncé :
>>
>>> Dear all,
>>>
>>> Today I updated my QE public trunk version (rev 11663).
>>>
>>> Since this update I have trouble compiling QE:
>>> mpif90 -O2 -assume byterecl -g -traceback -nomodule -fpp -D__INTEL
>>> -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include -I../iotk/src
>>> -I../ELPA/src -I. -c fft_scalar.f90
>>> fft_scalar.f90(66): #error: can't find include file: fftw3.f03
>>>
>>> Everything was working with the previous version (one month old).
>>>
>>> Do I need to specify something specific ?
>>>
>>> Best,
>>>
>>> Samuel Ponce
>>>
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>
>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] QE trunk version has trouble compiling fftw3

2015-08-03 Thread nicola varini
Hi Samuel, we are working on fixing configure so that it automatically
recognises where the
header is.
In the meantime you can set IFLAGS = ... -I${FFTW_INC}
where FFTW_INC is the header directory.
It should look something like

[nvarini@deneb1 BaTiO3]$ ls /ssoft/fftw/3.3.4/RH6/intel-15.0.2/include

fftw3.f  fftw3.f03  fftw3.h  fftw3l.f03  fftw3l-mpi.f03  fftw3-mpi.f03
fftw3-mpi.h  fftw3q.f03
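Concretely, something along these lines (the path is from our cluster; yours will differ, and the variable name FFTW_INC is just a convention):

export FFTW_INC=/ssoft/fftw/3.3.4/RH6/intel-15.0.2/include
# then in make.sys:
#   IFLAGS = -I../include -I${FFTW_INC}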
IHTH

Nicola


2015-08-03 15:01 GMT+02:00 Samuel Poncé :

> Dear all,
>
> Today I updated my QE public trunk version (rev 11663).
>
> Since this update I have trouble compiling QE:
> mpif90 -O2 -assume byterecl -g -traceback -nomodule -fpp -D__INTEL
> -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK   -I../include -I../iotk/src
> -I../ELPA/src -I. -c fft_scalar.f90
> fft_scalar.f90(66): #error: can't find include file: fftw3.f03
>
> Everything was working with the previous version (one month old).
>
> Do I need to specify something specific ?
>
> Best,
>
> Samuel Ponce
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] error in running pw.x command

2015-07-20 Thread nicola varini
Dear all, if you use MKL you can rely on the Intel MKL link line advisor for
proper linking:
https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
If you open the file ${MKLROOT}/include/mkl.h you will see the version number.
It should be something like

#define __INTEL_MKL__ 11
#define __INTEL_MKL_MINOR__ 2
#define __INTEL_MKL_UPDATE__ 2

In the page linked above, enter your version number, OS, and the other options.

You should get in output the link line you should use.
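A quick way to read those values (assuming MKLROOT is set by your compiler environment, e.g. by the mklvars/compilervars scripts):

grep '__INTEL_MKL' ${MKLROOT}/include/mkl.h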

HTH


Nicola




2015-07-20 9:57 GMT+02:00 Bahadır salmankurt :

> Dear Mohaddeseh et co,
>
> installing one of the older versions of MPI could solve the problem.
>
> 2015-07-20 10:06 GMT+03:00 Ari P Seitsonen :
>
>>
>> Dear Mohaddeseh et co,
>>
>>   Just a note: I used to have such problems when I had compiled with
>> MKL-ScaLAPACK of old version, indeed around 11.1, when I ran with more than
>> four cores. I think I managed to run when I disabled ScaLAPACK. Of course
>> this might be fully unrelated to your problem.
>>
>> Greetings from Lappeenranta,
>>
>>apsi
>>
>>
>> -=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-
>>   Ari Paavo Seitsonen / ari.p.seitso...@iki.fi / http://www.iki.fi/~apsi/
>>   Ecole Normale Supérieure (ENS), Département de Chimie, Paris
>>   Mobile (F) : +33 789 37 24 25(CH) : +41 79 71 90 935
>>
>>
>>
>> On Mon, 20 Jul 2015, Paolo Giannozzi wrote:
>>
>>  This is not a QE problem: the fortran code knows nothing about nodes and
>>> cores. It's the software setup for parallel execution on your machine that
>>> has a problem.
>>>
>>> Paolo
>>>
>>> On Thu, Jul 16, 2015 at 2:25 PM, mohaddeseh abbasnejad <
>>> m.abbasne...@gmail.com> wrote:
>>>
>>>   Dear all,
>>>
>>> I have recently installed PWscf (version 5.1) on our cluster (4 nodes,
>>> 32 cores).
>>> Ifort & mkl version 11.1 has been installed.
>>> When I run pw.x command on every node individually, for both the
>>> following command, it will work properly.
>>> 1- /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>>> 2- mpirun -n 4 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>>> However, when I use the following command (again for each of them,
>>> separately),
>>> 3- mpirun -n 8 /opt/exp_soft/espresso-5.1/bin/pw.x -in scf.in
>>> it gives me such an error:
>>>
>>> [cluster:14752] *** Process received signal ***
>>> [cluster:14752] Signal: Segmentation fault (11)
>>> [cluster:14752] Signal code:  (128)
>>> [cluster:14752] Failing at address: (nil)
>>> [cluster:14752] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
>>> [cluster:14752] [ 1]
>>> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
>>> [0x2b5e8e37d4f9]
>>> [cluster:14752] *** End of error message ***
>>>
>>> --
>>> mpirun noticed that process rank 4 with PID 14752 on node
>>> cluster.khayam.local exited on signal 11 (Segmentation fault).
>>>
>>> --
>>>
>>> This error also exists when I use all the node with each other in
>>> parallel mode (using the following command):
>>> 4- mpirun -n 32 -hostfile testhost /opt/exp_soft/espresso-5.1/bin/pw.x
>>> -in scf.in
>>> The error:
>>>
>>> [cluster:14838] *** Process received signal ***
>>> [cluster:14838] Signal: Segmentation fault (11)
>>> [cluster:14838] Signal code:  (128)
>>> [cluster:14838] Failing at address: (nil)
>>> [cluster:14838] [ 0] /lib64/libpthread.so.0() [0x3a78c0f710]
>>> [cluster:14838] [ 1]
>>> /opt/intel/Compiler/11.1/064/mkl/lib/em64t/libmkl_mc3.so(mkl_blas_zdotc+0x79)
>>> [0x2b04082cf4f9]
>>> [cluster:14838] *** End of error message ***
>>>
>>> --
>>> mpirun noticed that process rank 24 with PID 14838 on node
>>> cluster.khayam.local exited on signal 11 (Segmentation fault).
>>>
>>> --
>>>
>>> Any help will be appreciated.
>>>
>>> Regards,
>>> Mohaddeseh
>>>
>>> -
>>>
>>> Mohaddeseh Abbasnejad,
>>> Room No. 323, Department of Physics,
>>> University of Tehran, North Karegar Ave.,
>>> Tehran, P.O. Box: 14395-547- IRAN
>>> Tel. No.: +98 21 6111 8634  & Fax No.: +98 21 8800 4781
>>> Cellphone: +98 917 731 7514
>>> E-Mail: m.abbasne...@gmail.com
>>> Website:  http://physics.ut.ac.ir
>>>
>>> -
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>>
>>>
>>>
>>> --
>>> Paolo Giannozzi, Dept. Chemistry&Physics&Environment,
>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>> Phone +39-0432-558216, fax +39-0432-558222
>>>
>>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_foru

Re: [Pw_forum] no scf started for huge system

2015-07-09 Thread nicola varini
Hi, in order to run on a parallel machine please have a look at
http://www.quantum-espresso.org/wp-content/uploads/Doc/user_guide/node15.html
Which version of MPI are you using?
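As a starting point, a run command might look like this (a sketch only; file names are placeholders and the right number of pools depends on how many k-points you have, see the link above):

# 45 MPI tasks as in your job, with k-points distributed over 3 pools of 15 tasks;
# plane waves and FFTs are then distributed within each pool
mpirun -np 45 pw.x -npool 3 -in scf.in > scf.out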


2015-07-09 15:25 GMT+02:00 Ludwig, Stephan <
stephan.lud...@pi1.physik.uni-stuttgart.de>:

>  Hello,
>
> Of course you are right, but this is not the reason why the my calculation
> does not start..
>
>
> Thanks and Regards
>
>
> Stephan
>
> -Original message-
> *From:* Gabriel Greene 
> *Sent:* Thursday 9th July 2015 12:58
> *To:* PWSCF Forum 
> *Subject:* Re: [Pw_forum] no scf started for huge system
>
>
> Also, since you are using ultrasoft pseudopotentials you will need to set
> the charge density cutoff, and shouldnt rely on the default value.
>
> ecutrho should be 8-12 X ecutwfc, see the input file description
> http://www.quantum-espresso.org/wp-content/uploads/Doc/INPUT_PW.html#idp119152
>
>
> Gabriel Greene
> Tyndall National Institute
> University College Cork
> Ireland
>
>  --
> *From:* pw_forum-boun...@pwscf.org [pw_forum-boun...@pwscf.org] on behalf
> of Ludwig, Stephan [stephan.lud...@pi1.physik.uni-stuttgart.de]
> *Sent:* Thursday, July 09, 2015 11:48 AM
> *To:* PWSCF Forum
> *Subject:* Re: [Pw_forum] no scf started for huge system
>
>   Hello,
>
> I found that when I increase ecutwfc to 400 I receive an error
> message:
>
>
>  Initial potential from superposition of free atoms
>
> starting charge 875.99411, renormalised to 876.0
>
>
> %%
> Error in routine diropn (3):
> wrong record length
>
> %%
>
> stopping ...
> --
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> 
>
> 
>
> Does anybody know what this means?
>
>
>  Thanks and Regards
>
>
>  Stephan
>
>
>  -Original message-
> *From:* Ludwig, Stephan 
> *Sent:* Thursday 9th July 2015 12:42
> *To:* Forum, PWSCF (pw_forum@pwscf.org) 
> *Subject:* [Pw_forum] no scf started for huge system
>
>  Hello
>
> I try to do scf calculation for an organic salt with 236 atoms in a unit
> cell.
>
> I'm working on a cluster using 45 procs for the calculation.
>
> Within the allowed time span (2 days) these procs do not even start the
> scf cycles.
>
> The last part of the output data is:
>
>
>  Largest allocated arrays est. size (Mb) dimensions
> Kohn-Sham Wavefunctions 7508.34 Mb ( 935488, 526)
> NL pseudopotentials 20098.38 Mb ( 935488, 1408)
> Each V/rho on FFT grid 243.00 Mb (15925248)
> Each G-vector array 57.08 Mb ( 7481721)
> G-vector shells 14.43 Mb ( 1891088)
> Largest temporary arrays est. size (Mb) dimensions
> Auxiliary wavefunctions 30033.37 Mb ( 935488, 2104)
> Each subspace H/S matrix 67.55 Mb ( 2104, 2104)
> Each  matrix 11.30 Mb ( 1408, 526)
> Arrays for rho mixing 1944.00 Mb (15925248, 8)
>
> Check: negative/imaginary core charge= -0.01 0.00
>
> Initial potential from superposition of free atoms
>
> 
>
> starting charge 875.99411, renormalised to 876.0
> Starting wfc are 704 randomized atomic wfcs
>
> 
>
> Then the job cancels due to time limit.
>
> I want to be sure that this happens just because of the fact that the
> system is too huge and not because I made some bad mistake.
>
> My input file looks like this:
>
>
>  &CONTROL
> title = 'etot_vs_ecutwfc' ,
> calculation = 'scf' ,
> wf_collect = .FALSE.,
> outdir = './' ,
> wfcdir = './' ,
> pseudo_dir = '/home/st/st_st/st_phy72394/pseudo/' ,
> prefix = 'MeDH-TTPetot_vs_ecutwfc' ,
> /
> &SYSTEM
> ibrav = -12,
> A = 32.783 ,
> B = 7.995 ,
> C = 11.170 ,
> cosAB = 0 ,
> cosAC = -0.132602381688 ,
> cosBC = 0 ,
> nat = 236,
> ntyp = 5,
> ecutwfc = 200,
> occupations = 'smearing' ,
> degauss = 0.02 ,
> smearing = 'gaussian' ,
> exxdiv_treatment = 'gygi-baldereschi' ,
> /
> &ELECTRONS
> conv_thr = 1.0D-8
> /
> ATOMIC_SPECIES
> H 1.00790 H.pz-rrkjus_psl.0.1.UPF
> C 12.01100 C.pz-n-rrkjus_psl.0.1.UPF
> F 18.98800 F.pz-n-rrkjus_psl.0.1.UPF
> S 32.06500 S.pz-n-rrkjus_psl.0.1.UPF
> As 74.92200 As.pz-n-rrkjus_psl.0.2.UPF
> ATOMIC_POSITIONS angstroms
> S 5.52835 0.29521 2.16999
> S 21.91985 4.29246 2.16999
> S 4.78777 7.69929 7.70567
> S 21.17927 3.70204 7.70567
> S 8.91117 6.37745 3.91040
> S 25.30267 2.38020 3.91040
> S 8.17058 1.61705 9.44608
> S 24.56208 5.61430 9.44608
> S 8.53628 0.30838 1.68728
> S 24.92778 4.30563 1.68728
> S 7.79569 7.68612 7.22296
> S 24.18719 3.68887 7.22296
> S 5.91488 6.35256 4.41083
> S 22.30638 2.35531 4.41083
> S 5.17429 1.64194 9.94651
> S 21.56579 5.63919 9.94651
> As 31.25307 7.42312 4.40640
> As 14.86156 3.42587 4.40640
> A

Re: [Pw_forum] subroutine error.f90? or error_handler.f90? for pw2abinit

2015-05-28 Thread nicola varini
Hi Manuel, to fix the problem you should do something like:
ESPRESSO="set espresso path"

ifort -I$ESPRESSO/Modules -o pw2abinit.x pw2abinit.f90
$ESPRESSO/Modules/libqemod.a $ESPRESSO/flib/ptools.a $ESPRESSO/flib/flib.a
$ESPRESSO/clib/clib.a  -Wl,--start-group
${MKLROOT}/lib/intel64/libmkl_intel_ilp64.a
${MKLROOT}/lib/intel64/libmkl_core.a
${MKLROOT}/lib/intel64/libmkl_sequential.a -Wl,--end-group -lpthread -lm

Assuming you have intel compiler and mkl installed. Otherwise you can use
lapack and gfortran.

IHTH

Nicola



2015-05-27 16:58 GMT+02:00 Manuel Pérez Jigato :

> thanks a lot Paolo
>
> I have tried to compile pw2abinit with the -I option (gfortran) pointing
> to the Modules/error_handler.o, and I get all sorts of fortran errors,
> "undefined reference to .." many things
>
> Manuel
>
>
>
>
> Dr Manuel Pérez Jigato, Chargé de Recherche
> Luxembourg Institute of Science and Technology (LIST)
> Materials Research and Technology (MRT)
> 41 rue du Brill
> L-4422 BELVAUX
> Grand-Duché de Luxembourg
> Tel (+352) 47 02 61 - 584
> Fax (+352) 47 02 64
> e-mail  manuel.pe...@list.lu
>
>
>
> From: Paolo Giannozzi 
> To: PWSCF Forum ,
> Date: 27/05/2015 15:41
> Subject: Re: [Pw_forum] subroutine error.f90? or error_handler.f90? for
> pw2abinit
> Sent by: pw_forum-boun...@pwscf.org
> --
>
>
>
> Modules/error_handler.f90 should be the right place
>
> Paolo
>
> On Wed, May 27, 2015 at 1:58 PM, Manuel Pérez Jigato <
> *manuel.pe...@list.lu* > wrote:
>
>hello
>
>I am trying to locate the right subroutine in order to compile
>pw2abinit
>The code seems to need an object file $espresso/flib/error.o, although
>version 5.1.2 of quantum-espresso does not seem to have it
>
>I did find an error_handler.f90 and error_handler.o (I have already
>compiled quantum-espresso)
>inside Modules
>
>is there anyone who could let me know the right error.o file, or else,
>where can I find it?
>
>thanks a lot
>
>Manuel
>
>Dr Manuel Pérez Jigato, Chargé de Recherche
>Luxembourg Institute of Science and Technology (LIST)
>Materials Research and Technology (MRT)
>41 rue du Brill
>L-4422 BELVAUX
>Grand-Duché de Luxembourg
>Tel (+352) 47 02 61 - 584
>Fax (+352) 47 02 64
>e-mail  manuel.pe...@list.lu
>
>___
>Pw_forum mailing list
> *Pw_forum@pwscf.org* 
> *http://pwscf.org/mailman/listinfo/pw_forum*
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] Dinner

2014-06-22 Thread nicola varini
What are you doing for dinner?
On 16/06/2014 1:12 AM, "Filippo Spiga"  wrote:

> Dear Reza Behjatmanesh-Ardakani,
>
> On Jun 14, 2014, at 8:38 PM, Reza Behjatmanesh-Ardakani <
> reza_b_m_a at yahoo.com> wrote:
> > I have sent this error to the QE-GPU forum nearly 2 weeks ago; however,
> I didn't receive any reply. Maybe in QE forum one can help, so I repeat
> here:
>
> I haven't noticed the message, it is possible that my email client put it
> into SPAM automatically. Apologize.
>
>
> > I parepared an input for relaxation of carbon nanotube that
> > is in periodic form only in one-dimension with carbon
> > monoxide in it.
> > While it runs with pw.x, it encounter with error by pw-gpu.
> > Both programs are on the same machine, but in different
> > directory ( pw.x is in espresso-5.0.3, pw-gpu is in
> > espresso-503).
> > I installed the patch files on both programs. I used MPICH
> > and ifort (comoserxe 2013 update 2) for both.
> > I compiled pw-gpu.x without magma, as I saw some errors
> > such as cdiaghg error related to it.
> > I used core-i7 and GTX-660Ti, all for test. I used pw-gpu.x
> > on this system to run 1*1*3 supercell of carbon nanotube
> > without any problem.
> > The only difference of this new input with its pure form
> > comes back to the presence of CO inside it and vacuum in x
> > and y direction.
> > My GPU has only 2G and my system has 8G (the system is only
> > for test).
>
> If you run successfully one input case but another test-case does not work
> maybe the new physical system you want to simulate is too big (some data
> structure cannot fit GPU memory) or it has some unsupported features.
>
> Anyway, I do _not_ suggest to use a gaming card for QE-GPU. I understand
> how appealing is to use gaming cards to perform computation but QE-GPU is
> developed, maintained and tested for NVIDIA TESLA cards (Fermi or Kepler
> generation). It works, in principle, using gaming cards (GTX). However I do
> not guarantee performance or results reproducibility. At some point in time
> I will insert a control in the code such has if someone is trying to run
> the code on un-supported GPU, the code will terminate before start any
> calculation.
>
> Regards,
> Filippo
>
> --
> Mr. Filippo SPIGA, M.Sc.
> http://www.linkedin.com/in/filippospiga ~ skype: filippo.spiga
>
> "Nobody will drive us out of Cantor's paradise." ~ David Hilbert
>
> *
> Disclaimer: "Please note this message and any attachments are CONFIDENTIAL
> and may be privileged or otherwise protected from disclosure. The contents
> are not to be disclosed to anyone other than the addressee. Unauthorized
> recipients are requested to preserve this confidentiality and to advise the
> sender immediately of any error in transmission."
>
>
>
> ___
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>


[Pw_forum] band paralellization paper on Quantum-ESPRESSO

2013-03-29 Thread nicola varini
Dear all, yesterday I received notification that our paper
describing the recent
band parallelization algorithm has been accepted. If you plan to
apply for a PRACE Tier-0
or INCITE proposal, it would be worth citing it:
http://www.sciencedirect.com/science/article/pii/S0010465513001008

-- 

Dr Nicola Varini, Supercomputing specialist,
IVEC
ARRC Building, Technology Park
26 Dick Perry Avenue
Kensington WA 6151
Australia

T: +61 8 6436 8621
E: nicola at ivec.org