[Pw_forum] Fwd: Issue while executing QE-5.0 GPU

2014-10-28 Thread Nisha Agrawal
Hi Filippo,

Please help with the issue described below.

"- Forwarded
From: Nisha Agrawal <itlinkstoni...@gmail.com>
Date: Tue, Oct 21, 2014 at 8:37 PM
Subject: Re: [Pw_forum] Issue while executing QE-5.0 GPU
To: PWSCF Forum <pw_forum@pwscf.org>


Hi Filippo,

As per your suggestion, I ran the non-accelerated version of QE on the same
input data, and it worked well for me.
Following are the contents of the make.sys file used for the QE-GPU compilation.
Please let me know what other details are required to help with this issue.

#

MANUAL_DFLAGS  = -D__ISO_C_BINDING -D__DISABLE_CUDA_NEWD
 -D__DISABLE_CUDA_ADDUSDENS
DFLAGS =  -D__INTEL -D__FFTW3 -D__MPI -D__PARA -D__SCALAPACK
-D__CUDA -D__PHIGEMM -D__OPENMP $(MANUAL_DFLAGS)
FDFLAGS= $(DFLAGS)

# IFLAGS = how to locate directories where files to be included are
# In most cases, IFLAGS = -I../include

IFLAGS = -I../include
 -I/opt/app/espresso-5.0.2-gpu-14.03/espresso-5.0.2/GPU/..//phiGEMM/include
-I/opt/CUDA-5.5/include

# MOD_FLAGS = flag used by f90 compiler to locate modules
# Each Makefile defines the list of needed modules in MODFLAGS

MOD_FLAG  = -I

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90 = mpiifort
#F90   = ifort
CC = mpiicc
F77= mpiifort

# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS

CPP= cpp
CPPFLAGS   = -P -traditional $(DFLAGS) $(IFLAGS)

# compiler flags: C, F90, F77
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate
# syntax

CFLAGS = -DMKL_ILP64 -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS   = $(FFLAGS) -nomodule -fpp $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS = -i8 -O2 -assume byterecl -g -traceback -par-report0
-vec-report0

# compiler flags without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack

FFLAGS_NOOPT   = -i8 -O0 -assume byterecl -g -traceback

# compiler flag needed by some compilers when the main is not fortran
# Currently used for Yambo

FFLAGS_NOMAIN   = -nofor_main

# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty

LD = mpiifort
LDFLAGS=  -ilp64
LD_LIBS= -L/opt/CUDA-5.5/lib64 -lcublas  -lcufft -lcudart

# External Libraries (if any) : blas, lapack, fft, MPI

# If you have nothing better, use the local copy :
# BLAS_LIBS = /your/path/to/espresso/BLAS/blas.a
# BLAS_LIBS_SWITCH = internal

BLAS_LIBS  =
/opt/app/espresso-5.0.2-gpu-14.03/espresso-5.0.2/GPU/..//phiGEMM/lib/libphigemm.a
 -L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64 -lmkl_intel_ilp64
-lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm
BLAS_LIBS_SWITCH = external

# If you have nothing better, use the local copy :
# LAPACK_LIBS = /your/path/to/espresso/lapack-3.2/lapack.a
# LAPACK_LIBS_SWITCH = internal
# For IBM machines with essl (-D__ESSL): load essl BEFORE lapack !
# remember that LAPACK_LIBS precedes BLAS_LIBS in loading order

# CBLAS is used in case the C interface for BLAS is missing (i.e. ACML)
CBLAS_ENABLED = 0

LAPACK_LIBS=
LAPACK_LIBS_SWITCH = external

ELPA_LIBS_SWITCH = disabled
SCALAPACK_LIBS = -lmkl_scalapack_ilp64 -lmkl_blacs_intelmpi_ilp64

# nothing needed here if the internal copy of FFTW is compiled
# (needs -D__FFTW in DFLAGS)

FFT_LIBS   = -L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64

# For parallel execution, the correct path to MPI libraries must
# be specified in MPI_LIBS (except for IBM if you use mpxlf)

MPI_LIBS   =

# IBM-specific: MASS libraries, if available and if -D__MASS is defined in
# FDFLAGS

MASS_LIBS  =

# ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv

AR = ar
ARFLAGS= ruv

# ranlib command. If ranlib is not needed (it isn't in most cases) use
# RANLIB = echo

RANLIB = ranlib

# all internal and external libraries - do not modify

FLIB_TARGETS   = all

# CUDA section
NVCC = /opt/CUDA-5.5/bin/nvcc
NVCCFLAGS= -O3  -gencode arch=compute_20,code=sm_20 -gencode
arch=compute_20,code=sm_21

PHIGEMM_INTERNAL = 1
PHIGEMM_SYMBOLS  = 1
MAGMA_INTERNAL   = 0

LIBOBJS= ../flib/ptools.a ../flib/flib.a ../clib/clib.a
../iotk/src/libiotk.a
LIBS   = $(SC
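
For what it's worth, a quick consistency check on a build like the one above is to confirm what the resulting binary actually linked: the make.sys uses the MKL ILP64 interface (-DMKL_ILP64, -i8, -lmkl_intel_ilp64, ILP64 ScaLAPACK/BLACS) plus the CUDA 5.5 libraries. A minimal sketch, assuming the QE-GPU build produced a pw-gpu.x executable in bin/ (the executable name is an assumption, not taken from the post):

  # which MKL interface layer was resolved at link time (expect ilp64 here)
  ldd bin/pw-gpu.x | grep -i mkl
  # confirm cuBLAS, cuFFT and the CUDA runtime from /opt/CUDA-5.5 are found
  ldd bin/pw-gpu.x | grep -E 'cublas|cufft|cudart'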

Re: [Pw_forum] Issue while executing QE-5.0 GPU

2014-10-21 Thread Nisha Agrawal
"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."

On Tue, Oct 21, 2014 at 2:32 AM, Filippo Spiga <spiga.fili...@gmail.com>
wrote:

> Dear Nisha,
>
> the error as reported in your email does not give many details,
> honestly. Make sure --with-gpu-arch=sm_20 is set for your GPU.
>
> If it runs properly for small systems on your machine but dies for big
> systems, then check whether the normal non-accelerated version of QE can run.
> If it runs and the problem appears only when the GPU is turned on, then we
> can try to investigate further.
>
> HTH
> F
>
> On Oct 17, 2014, at 5:27 AM, Nisha Agrawal <itlinkstoni...@gmail.com>
> wrote
>
> > Hi,
> >
> >
> > I installed Quantum ESPRESSO GPU v14.03.0, Intel compilers 13.0, and
> > Intel MKL 11.0. We have NVIDIA M2090 GPU cards. The issue I am facing is
> > that it runs well for small input data, but for big input data it
> > terminates with the following error. Did I miss any compilation flag?
> > Does Quantum ESPRESSO GPU v14.03.0 work well with the Intel compiler?
> > Please help.
> >
> > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > Image  PCRoutineLine
> Source
> > libmkl_avx.so  2AB729DF919A  Unknown   Unknown
> Unknown
> > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > Image  PCRoutineLine
> Source
> > libmkl_avx.so  2B3B05DF919A  Unknown   Unknown
> Unknown
> > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > Image  PCRoutineLine
> Source
> > libmkl_avx.so  2B5549DF919A  Unknown   Unknown
> Unknown
> >
>
> --
> Mr. Filippo SPIGA, M.Sc.
> http://filippospiga.info ~ skype: filippo.spiga
>
> «Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
>
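On the --with-gpu-arch=sm_20 point above: the M2090 is a Fermi-class card with compute capability 2.0, so sm_20 is the matching target (and the NVCCFLAGS in the posted make.sys already generate code for it). A minimal way to double-check from the shell; the deviceQuery path is an assumption based on the /opt/CUDA-5.5 install used in this thread, and the sample has to be built first:

  # list the installed GPUs by name (should report Tesla M2090)
  nvidia-smi -L
  # print the compute capability reported by the CUDA sample
  /opt/CUDA-5.5/samples/1_Utilities/deviceQuery/deviceQuery | grep -i capability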

[Pw_forum] Issue while executing QE-5.0 GPU

2014-10-16 Thread Nisha Agrawal
Hi,


I installed Quantum ESPRESSO GPU v14.03.0, Intel compilers 13.0, and Intel
MKL 11.0. We have NVIDIA M2090 GPU cards. The issue I am facing is that it
runs well for small input data, but for big input data it terminates with
the following error. Did I miss any compilation flag? Does Quantum ESPRESSO
GPU v14.03.0 work well with the Intel compiler? Please help.

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image  PCRoutineLineSource

libmkl_avx.so  2AB729DF919A  Unknown   Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image  PCRoutineLineSource

libmkl_avx.so  2B3B05DF919A  Unknown   Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image  PCRoutineLineSource

libmkl_avx.so  2B5549DF919A  Unknown   Unknown  Unknown
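The isolation step Filippo suggests in his reply above boils down to running the identical input through the plain CPU build and the GPU build and comparing (as reported at the top of this thread, the non-accelerated build ran fine on the same input). A minimal sketch; pw-gpu.x as the name of the accelerated binary and big_system.in as the input file are placeholders, not taken from the thread:

  # same input, same MPI layout; CPU build first, then the GPU build
  mpirun -np 8 ./bin/pw.x     -in big_system.in > cpu.out 2>&1
  mpirun -np 8 ./bin/pw-gpu.x -in big_system.in > gpu.out 2>&1
  # if only the GPU run segfaults, the problem is in the accelerated path
  grep 'total energy' cpu.out gpu.out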

[Pw_forum] QE on Xeon Phi : Execution Issue

2014-07-24 Thread Nisha Agrawal
Dear Fabio Affinito

Thank you so much for information.

"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."


On Wed, Jul 23, 2014 at 5:32 PM, Nisha Agrawal 
wrote:

> Hi,
>
> I setup the quantum espresso Intel Xeon Phi version using the instruction
> provided in the following link
>
>
> https://software.intel.com/en-us/articles/quantum-espresso-for-intel-xeon-phi-coprocessor
>
>
> However, when I run it, the computation is not getting offloaded to the
> Intel Xeon Phi. Following is the script I am using to run the QE MIC
> version. Please let me know if I missed something that needs to be set,
> or if I am doing something wrong.
>
>
> ---
> source /home/opt/ICS-2013.1.039-intel64/bin/compilervars.sh intel64
> source /home/opt/ICS-2013.1.039-intel64/mkl/bin/mklvars.sh intel64
> source /home/opt/ICS-2013.1.039-intel64/impi/4.1.2.040/bin64/mpivars.sh
>
> export MKL_MIC_ENABLE=1
> export MKL_DYNAMIC=false
> export MKL_MIC_DISABLE_HOST_FALLBACK=1
> export MIC_LD_LIBRARY_PATH=$MKLROOT/lib/mic:$MIC_LD_LIBRARY_PATH
>
> export OFFLOAD_DEVICES=0
>
> export I_MPI_FALLBACK_DEVICE=disable
> export I_MPI_PIN=disable
> export I_MPI_DEBUG=5
>
>
> export MKL_MIC_ZGEMM_AA_M_MIN=500
> export MKL_MIC_ZGEMM_AA_N_MIN=500
> export MKL_MIC_ZGEMM_AA_K_MIN=500
> export MKL_MIC_THRESHOLDS_ZGEMM=500,500,500
>
>
> export OFFLOAD_REPORT=2
> mpirun  -np 8 -perhost 4  ./espresso-5.0.2/bin/pw.x   -in  ./BN.in 2>&1 |
> tee test.log
>
> -
>
> ---
>
>
> "Apologizing does not mean that you are wrong and the other one is right...
> It simply means that you value the relationship much more than your ego.."
>
>
> On Mon, Jul 14, 2014 at 8:16 PM, Axel Kohlmeyer 
> wrote:
>
>> On Mon, Jul 14, 2014 at 9:34 AM, Eduardo Menendez 
>> wrote:
>> > Thank you Axel. Your advice raises another doubt: can we get the maximum
>> > performance from a highly clocked CPU?
>> > I used to consider that the fastest CPUs were too fast for the memory
>> > access, resulting in bottlenecks. Of course it depends on cache size.
>>
>> your concern is justified, but the situation is more complex these
>> days. highly clocked CPUs have fewer cores and thus receive a larger
>> share of the available memory bandwidth and the highest clocked
>> inter-CPU and memory bus is only available for a subset of the CPUs.
>> now you have an optimization problem that has to consider the strong
>> scaling (or lack thereof) of the code in question as an additional
>> input parameter.
>>
>> to give an example: we purchased at the same time dual socket nodes
>> that had the same mainboard, but either 2x 3.5GHz quad-core or 2x
>> 2.8GHz hex-core. the 3.5GHz was the fastest clock available at the
>> time. for classical MD, i get better performance out of the 12-core
>> nodes, for plane-wave DFT i get about the same performance out of
>> both, for CP2k i get better performance with the 8-core (in fact, CP2k
>> runs fastest on the 12-core with using only 8 cores). now, the cost of
>> the 2.8GHz CPUs is significantly lower, so that is why we procured the
>> majority of the cluster with those. but we do have applications that
>> scale less than CP2k or are serial, but require high per-core memory
>> bandwidth, so we got a few of the 3.5GHz ones, too (and since they are
>> already expensive we filled them with RAM as much as it doesn't result
>> in underclocking of the memory bus; and in turn we put "only" 1GB/core
>> into the 12-core nodes).
>>
>> so it all boils down to finding the right balance and adjusting it to
>> the application mix that you are running. last time i checked the
>> intel spec sheets, it looked as if the best deal was to be had for
>> CPUs with the second largest number of CPU cores and as high a clock
>> as required to have the full memory bus speed. that will also keep the
>> heat in check, as the highest clocked CPUs usually have a much higher
>> TDP (>50% more) and that is just a much larger demand on cooling and
>> power and will incur additional indirect costs as well.
>>
>> HTH,
>> axel.
>>
>>
>> >
>> >>Stick with the cpu. For QE you should be best off with intel. Also you
>> are
>> >&

[Pw_forum] QE on Xeon Phi : Execution Issue

2014-07-23 Thread Nisha Agrawal
Hi,

I setup the quantum espresso Intel Xeon Phi version using the instruction
provided in the following link

https://software.intel.com/en-us/articles/quantum-espresso-for-intel-xeon-phi-coprocessor


However, when I run it, the computation is not getting offloaded to the
Intel Xeon Phi. Following is the script I am using to run the QE MIC version.
Please let me know if I missed something that needs to be set, or if I am
doing something wrong.

---
source /home/opt/ICS-2013.1.039-intel64/bin/compilervars.sh intel64
source /home/opt/ICS-2013.1.039-intel64/mkl/bin/mklvars.sh intel64
source /home/opt/ICS-2013.1.039-intel64/impi/4.1.2.040/bin64/mpivars.sh

export MKL_MIC_ENABLE=1
export MKL_DYNAMIC=false
export MKL_MIC_DISABLE_HOST_FALLBACK=1
export MIC_LD_LIBRARY_PATH=$MKLROOT/lib/mic:$MIC_LD_LIBRARY_PATH

export OFFLOAD_DEVICES=0

export I_MPI_FALLBACK_DEVICE=disable
export I_MPI_PIN=disable
export I_MPI_DEBUG=5


export MKL_MIC_ZGEMM_AA_M_MIN=500
export MKL_MIC_ZGEMM_AA_N_MIN=500
export MKL_MIC_ZGEMM_AA_K_MIN=500
export MKL_MIC_THRESHOLDS_ZGEMM=500,500,500


export OFFLOAD_REPORT=2
mpirun  -np 8 -perhost 4  ./espresso-5.0.2/bin/pw.x   -in  ./BN.in 2>&1 |
tee test.log

-
---
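
With MKL_MIC_ENABLE=1 and OFFLOAD_REPORT=2 set as in the script above, MKL prints an offload-report line for each ZGEMM it ships to the coprocessor, so the quickest check of whether anything was offloaded is to look for those lines in the captured log. A minimal sketch (test.log is the file written by the tee in the script); note also that, if I read the MKL_MIC_ZGEMM_AA_*_MIN / MKL_MIC_THRESHOLDS_ZGEMM settings correctly, only ZGEMM calls with m, n, k >= 500 are candidates for automatic offload, so a small test case may legitimately stay on the host:

  # offload-report lines mention the MIC card; no matches means nothing
  # was offloaded in this run
  grep -i 'MIC' test.log | head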


"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."


On Mon, Jul 14, 2014 at 8:16 PM, Axel Kohlmeyer  wrote:

> On Mon, Jul 14, 2014 at 9:34 AM, Eduardo Menendez 
> wrote:
> > Thank you Axel. Your advice raises another doubt: can we get the maximum
> > performance from a highly clocked CPU?
> > I used to consider that the fastest CPUs were too fast for the memory
> > access, resulting in bottlenecks. Of course it depends on cache size.
>
> your concern is justified, but the situation is more complex these
> days. highly clocked CPUs have fewer cores and thus receive a larger
> share of the available memory bandwidth and the highest clocked
> inter-CPU and memory bus is only available for a subset of the CPUs.
> now you have an optimization problem that has to consider the strong
> scaling (or lack thereof) of the code in question as an additional
> input parameter.
>
> to give an example: we purchased at the same time dual socket nodes
> that had the same mainboard, but either 2x 3.5GHz quad-core or 2x
> 2.8GHz hex-core. the 3.5GHz was the fastest clock available at the
> time. for classical MD, i get better performance out of the 12-core
> nodes, for plane-wave DFT i get about the same performance out of
> both, for CP2k i get better performance with the 8-core (in fact, CP2k
> runs fastest on the 12-core with using only 8 cores). now, the cost of
> the 2.8GHz CPUs is significantly lower, so that is why we procured the
> majority of the cluster with those. but we do have applications that
> scale less than CP2k or are serial, but require high per-core memory
> bandwidth, so we got a few of the 3.5GHz ones, too (and since they are
> already expensive we filled them with RAM as much as it doesn't result
> in underclocking of the memory bus; and in turn we put "only" 1GB/core
> into the 12-core nodes).
>
> so it all boils down to finding the right balance and adjusting it to
> the application mix that you are running. last time i checked the
> intel spec sheets, it looked as if the best deal was to be had for
> CPUs with the second largest number of CPU cores and as high a clock
> as required to have the full memory bus speed. that will also keep the
> heat in check, as the highest clocked CPUs usually have a much higher
> TDP (>50% more) and that is just a much larger demand on cooling and
> power and will incur additional indirect costs as well.
>
> HTH,
> axel.
>
>
> >
> >> Stick with the cpu. For QE you should be best off with intel. Also you are
> >> likely to get the best price/performance ratio with CPUs that have less
> >> than the maximum number of cpu cores and a higher clock instead.
> >
> >
> > Eduardo Menendez Proupin
> > Departamento de Fisica, Facultad de Ciencias, Universidad de Chile
> > URL: http://www.gnm.cl/emenendez
> >
> > "Science may be described as the art of systematic oversimplification."
> Karl
> > Popper
> >
> >
>
>
>
> --
> Dr. Axel Kohlmeyer  akohlmey at gmail.com  http://goo.gl/1wk0
> College of Science & Technology, Temple University, Philadelphia PA, USA
> International Centre for Theoretical Physics, Trieste. Italy.
>

[Pw_forum] Error in Execution while linked with Intel MKL - 11.0 64-bit : QE-5.0.2

2013-03-07 Thread Nisha Agrawal
Hi Dr. Lorenzo Paulatto

Thank you so much. My issue is resolved.
But what does this flag do?

Thanks again.

"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."


On Thu, Mar 7, 2013 at 2:52 PM, Lorenzo Paulatto
 wrote:
> Try to set
> MANUAL_DFLAGS = -D__ISO_C_BINDING
> in make.sys, then make clean and recompile.
>
> cheers
>
> --
> Dr. Lorenzo Paulatto
> IdR @ IMPMC -- CNRS & Université Paris 6
> phone: +33 (0)1 44275 084 / skype: paulatz
> www:   http://www-int.impmc.upmc.fr/~paulatto/
> mail:  23-24/4é16 Boîte courrier 115, 4 place Jussieu 75252 Paris Cédex 05
>
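In practical terms, Lorenzo's suggestion above is one edit in make.sys followed by a clean rebuild from the top-level espresso-5.0.2 directory. A minimal sketch (make pw rebuilds just PWscf; make all works as well):

  # the MANUAL_DFLAGS line should now carry the flag Lorenzo suggests
  grep '^MANUAL_DFLAGS' make.sys
  # rebuild from scratch so every object is compiled with it
  make clean
  make pw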



[Pw_forum] Error in Execution while linked with Intel MKL - 11.0 64-bit : QE-5.0.2

2013-03-07 Thread Nisha Agrawal
Hi Paolo

It works fine with gcc.

"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."


On Thu, Mar 7, 2013 at 1:02 AM, Paolo Giannozzi  
wrote:
> On Wednesday 06 March 2013 06:48, Nisha Agrawal wrote:
>
>> [eval_infix.c] A parsing error occurred
>> helper string:
>> -
>> error code:
>> Error: missing operand
>
>> Kindly help to resolve this issue.
>
> please help to resolve this issue:
> - verify if you have any strange character (MS-DOS and the like)
>   in your file
> - try gcc instead of Intel icc
> - look into clib/eval-infix.c and try to figure out where it stops and why
> On two (very) different executables, your input works for me
>
> Paolo
> --
> Paolo Giannozzi, IOM-Democritos and DCFA, Univ. Udine, Italy
>


[Pw_forum] Error in Execution while linked with Intel MKL - 11.0 64-bit : QE-5.0.2

2013-03-07 Thread Nisha Agrawal
Hi Paolo

It works fine if I use the Intel MKL LP64 interface, but
if I use the Intel MKL ILP64 interface it throws that parsing error.
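
For reference, the configure command quoted further down in this thread passes the ILP64 MKL names (-lmkl_intel_ilp64, -lmkl_scalapack_ilp64, -lmkl_blacs_intelmpi_ilp64). A hedged sketch of the LP64 counterpart that matches the interface which worked here; the paths are copied from that configure line, the lp64 library names are the standard MKL ones (not quoted in the thread), and the sequential threading layer is kept as in the original:

  BLAS_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64 \
             -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"
  SCALAPACK_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64 \
                  -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"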



"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."


On Thu, Mar 7, 2013 at 1:02 AM, Paolo Giannozzi  
wrote:
> On Wednesday 06 March 2013 06:48, Nisha Agrawal wrote:
>
>> [eval_infix.c] A parsing error occurred
>> helper string:
>> -
>> error code:
>> Error: missing operand
>
>> Kindly help to resolve this issue.
>
> please help to resolve this issue:
> - verify if you have any strange character (MS-DOS and the like)
>   in your file
> - try gcc instead of Intel icc
> - look into clib/eval-infix.c and try to figure out where it stops and why
> On two (very) different executables, your input works for me
>
> Paolo
> --
> Paolo Giannozzi, IOM-Democritos and DCFA, Univ. Udine, Italy
>


[Pw_forum] Error in Execution while linked with Intel MKL - 11.0 64-bit : QE-5.0.2

2013-03-06 Thread Nisha Agrawal
Hi Mohnish

Thanks for your reply. I am still getting the same issue
after trying nosym=.true. I am attaching the modified
input file.

Just one query: have you used the same compiler version
and libraries for configuring espresso-5.0.2 as I have
used?

With Thanks & Regards
Nisha

"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."


On Wed, Mar 6, 2013 at 1:03 PM, mohnish pandey  
wrote:
> It seems you are messing up the symmetry by the way you are defining
> the cell. I ran your input file with nosym = .true. and it works. Your
> original input file is giving me a symmetry error, so I tried without
> symmetry; maybe you can do the same.
>
> On Wed, Mar 6, 2013 at 6:48 AM, Nisha Agrawal 
> wrote:
>>
>> Hi
>>
>> I am using the configuration below while installing the parallel
>> version of Quantum ESPRESSO 5.0.2:
>>
>> 1. Intel MPI
>> 2. Intel Composer XE 2013 (Intel MKL 11.0, ifort 13.0, Intel MKL LAPACK &
>> FFTW3)
>>
>> I) Below are the steps I am using to configure and install Quantum
>> ESPRESSO 5.0.2 (make.sys file attached):
>>
>> Step 1: below is the configure command
>>
>> ./configure --prefix=~/espresso_5.0.2 MPIF90=mpiifort FC=ifort
>> F77=ifort F90=ifort CXX=icpc CC=icc
>> BLAS_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64
>> -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm"
>> LAPACK_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/"
>>
>> FFT_LIBS="/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/libfftw3x_cdft_ilp64.a"
>> SCALAPACK_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/
>> -lmkl_scalapack_ilp64 -lmkl_blacs_intelmpi_ilp64"
>>
>> Step 2: make all
>>
>> II) Command for execution
>>
>> mpirun -np 2 ./bin/pw.x -in BN.in
>>
>>  III) While running I am getting the following error message (
>> attached input file)
>>
>>
>>
>> ---
>>  Program PWSCF v.5.0.2 (svn rev. 9392) starts on  4Mar2013 at 16:50:15
>>
>>  This program is part of the open-source Quantum ESPRESSO suite
>>  for quantum simulation of materials; please cite
>>  "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
>>   URL http://www.quantum-espresso.org",
>>  in publications or presentations arising from this work. More details
>> at
>>  http://www.quantum-espresso.org/quote.php
>>
>>  Parallel version (MPI), running on 2 processors
>>  R & G space division:  proc/nbgrp/npool/nimage =   2
>>
>>  Current dimensions of program PWSCF are:
>>  Max number of different atomic species (ntypx) = 10
>>  Max number of k-points (npk) =  4
>>  Max angular momentum in pseudopotentials (lmaxx) =  3
>>  Reading input from ./only-BN.in
>> Warning: card  ignored
>> Warning: card / ignored
>> [eval_infix.c] A parsing error occurred
>> helper string:
>> -
>> error code:
>> Error: missing operand
>>
>> [eval_infix.c] A parsing error occurred
>> helper string:
>> -
>>
>>  %%
>>  Error in routine card_atomic_positions (1):
>>  Error while parsing atomic position card.
>> error code:
>> Error: missing operand
>>
>>  
>>
>>  stopping ...
>>
>>  %
>>  Error in routine card_atomic_positions (1):
>>  Error while parsing atomic position card.
>>  
>>
>>  stopping ...
>>
>> 
>>
>>
>> Kindly help to resolve this issue.
>>
>>
>>
>> With Thanks & Regards
>> Nis
>>
>>
>>
>>
>> "Apologizing does not mean that you are wrong and the other one is
>> right...
>> It simply means that you value the relationship much more than your ego.."
>>
>
>
>
>
> --
> Regards,
> MOHNISH,
> -
> Mohnish Pandey,
> PhD Student,
> Center for Atomic Scale Materials Design,
> Department of Physics,
> Technical University of Denmark
> -
>
-- next part --
A non-text attachment was scrubbed...
Name: only-BN.in
Type: application/octet-stream
Size: 883 bytes
Desc: not available
Url : 
http://pwscf.org/pipermail/pw_forum/attachments/20130306/1f500f75/attachment.obj
 


[Pw_forum] Error in Execution while linked with Intel MKL - 11.0 64-bit : QE-5.0.2

2013-03-06 Thread Nisha Agrawal
Hi Prasenjit,

Thanks. I have checked, but that is not the issue.




"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."


On Wed, Mar 6, 2013 at 1:29 PM, Prasenjit Ghosh  
wrote:
> Dear Nisha,
>
> The following may be the case: I am not sure but you can have a look.
> http://www.democritos.it/pipermail/pw_forum/2009-October/014548.html
>
> In 5.0.2, for a hexagonal unit cell, the a vector should be aligned parallel
> to the x-axis.
>
> Prasenjit
>
>
> On 6 March 2013 13:03, mohnish pandey  wrote:
>>
>> It seems you are messing up the symmetry by the way you are defining
>> the cell. I ran your input file with nosym = .true. and it works. Your
>> original input file is giving me a symmetry error, so I tried without
>> symmetry; maybe you can do the same.
>>
>> On Wed, Mar 6, 2013 at 6:48 AM, Nisha Agrawal 
>> wrote:
>>>
>>> Hi
>>>
>>> I am using the configuration below while installing the parallel
>>> version of Quantum ESPRESSO 5.0.2:
>>>
>>> 1. Intel MPI
>>> 2. Intel Composer XE 2013 (Intel MKL 11.0, ifort 13.0, Intel MKL LAPACK &
>>> FFTW3)
>>>
>>> I) Below are the steps I am using to configure and install Quantum
>>> ESPRESSO 5.0.2 (make.sys file attached):
>>>
>>> Step 1: below is the configure command
>>>
>>> ./configure --prefix=~/espresso_5.0.2 MPIF90=mpiifort FC=ifort
>>> F77=ifort F90=ifort CXX=icpc CC=icc
>>> BLAS_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64
>>> -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm"
>>> LAPACK_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/"
>>>
>>> FFT_LIBS="/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/libfftw3x_cdft_ilp64.a"
>>> SCALAPACK_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/
>>> -lmkl_scalapack_ilp64 -lmkl_blacs_intelmpi_ilp64"
>>>
>>> Step 2: make all
>>>
>>> II) Command for execution
>>>
>>> mpirun -np 2 ./bin/pw.x -in BN.in
>>>
>>>  III) While running I am getting the following error message (
>>> attached input file)
>>>
>>>
>>>
>>> ---
>>>  Program PWSCF v.5.0.2 (svn rev. 9392) starts on  4Mar2013 at 16:50:15
>>>
>>>  This program is part of the open-source Quantum ESPRESSO suite
>>>  for quantum simulation of materials; please cite
>>>  "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
>>>   URL http://www.quantum-espresso.org",
>>>  in publications or presentations arising from this work. More
>>> details at
>>>  http://www.quantum-espresso.org/quote.php
>>>
>>>  Parallel version (MPI), running on 2 processors
>>>  R & G space division:  proc/nbgrp/npool/nimage =   2
>>>
>>>  Current dimensions of program PWSCF are:
>>>  Max number of different atomic species (ntypx) = 10
>>>  Max number of k-points (npk) =  4
>>>  Max angular momentum in pseudopotentials (lmaxx) =  3
>>>  Reading input from ./only-BN.in
>>> Warning: card  ignored
>>> Warning: card / ignored
>>> [eval_infix.c] A parsing error occurred
>>> helper string:
>>> -
>>> error code:
>>> Error: missing operand
>>>
>>> [eval_infix.c] A parsing error occurred
>>> helper string:
>>> -
>>>
>>>  %%
>>>  Error in routine card_atomic_positions (1):
>>>  Error while parsing atomic position card.
>>> error code:
>>> Error: missing operand
>>>
>>>  
>>>
>>>  stopping ...
>>>
>>>  %
>>>  Error in routine card_atomic_positions (1):
>>>  Error while parsing atomic position card.
>>>  
>>>
>>>  stopping ...
>>>
>>> 
>>>
>>>
>>> Kindly help to resolve this issue.
>>>
>>>
>>>
>>> With Thanks & Regards
>

[Pw_forum] Still Error in Execution while linked with Intel MKL - 11.0 64-bit : QE-5.0.2

2013-03-06 Thread Nisha Agrawal
Hi Mohnish

Thanks for your reply. I am still getting the same issue
after trying nosym=.true. I am attaching the modified
input file.

Just one query: have you used the same compiler version
and libraries for configuring espresso-5.0.2 as I have
used?

With Thanks & Regards
Nisha


"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."
-- next part --
A non-text attachment was scrubbed...
Name: only-BN.in
Type: application/octet-stream
Size: 883 bytes
Desc: not available
Url : 
http://pwscf.org/pipermail/pw_forum/attachments/20130306/2c341d58/attachment.obj
 


[Pw_forum] Error in Execution while linked with Intel MKL - 11.0 64-bit : QE-5.0.2

2013-03-06 Thread Nisha Agrawal
Hi

I am using the configuration below while installing the parallel
version of Quantum ESPRESSO 5.0.2:

1. Intel MPI
2. Intel Composer XE 2013 (Intel MKL 11.0, ifort 13.0, Intel MKL LAPACK & FFTW3)

I) Below are the steps I am using to configure and install Quantum
ESPRESSO 5.0.2 (make.sys file attached):

Step 1: below is the configure command

./configure --prefix=~/espresso_5.0.2 MPIF90=mpiifort FC=ifort
F77=ifort F90=ifort CXX=icpc CC=icc
BLAS_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64
-lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm"
LAPACK_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/"
FFT_LIBS="/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/libfftw3x_cdft_ilp64.a"
SCALAPACK_LIBS="-L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64/
-lmkl_scalapack_ilp64 -lmkl_blacs_intelmpi_ilp64"

Step 2: make all

II) Command for execution

mpirun -np 2 ./bin/pw.x -in BN.in

 III) While running I am getting the following error message (
attached input file)


---
 Program PWSCF v.5.0.2 (svn rev. 9392) starts on  4Mar2013 at 16:50:15

 This program is part of the open-source Quantum ESPRESSO suite
 for quantum simulation of materials; please cite
 "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
  URL http://www.quantum-espresso.org",
 in publications or presentations arising from this work. More details at
 http://www.quantum-espresso.org/quote.php

 Parallel version (MPI), running on 2 processors
 R & G space division:  proc/nbgrp/npool/nimage =   2

 Current dimensions of program PWSCF are:
 Max number of different atomic species (ntypx) = 10
 Max number of k-points (npk) =  4
 Max angular momentum in pseudopotentials (lmaxx) =  3
 Reading input from ./only-BN.in
Warning: card  ignored
Warning: card / ignored
[eval_infix.c] A parsing error occurred
helper string:
-
error code:
Error: missing operand

[eval_infix.c] A parsing error occurred
helper string:
-

 %%
 Error in routine card_atomic_positions (1):
 Error while parsing atomic position card.
error code:
Error: missing operand

 

 stopping ...

 %
 Error in routine card_atomic_positions (1):
 Error while parsing atomic position card.
 

 stopping ...



Kindly help to resolve this issue.



With Thanks & Regards
Nis




"Apologizing does not mean that you are wrong and the other one is right...
It simply means that you value the relationship much more than your ego.."
-- next part --
# make.sys.  Generated from make.sys.in by configure.

# compilation rules

.SUFFIXES :
.SUFFIXES : .o .c .f .f90

# most fortran compilers can directly preprocess c-like directives: use
#   $(MPIF90) $(F90FLAGS) -c $<
# if explicit preprocessing by the C preprocessor is needed, use:
#   $(CPP) $(CPPFLAGS) $< -o $*.F90 
#   $(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!

.f90.o:
$(MPIF90) $(F90FLAGS) -c $<

# .f.o and .c.o: do not modify

.f.o:
$(F77) $(FFLAGS) -c $<

.c.o:
$(CC) $(CFLAGS)  -c $<


# DFLAGS  = precompilation options (possible arguments to -D and -U)
#   used by the C compiler and preprocessor
# FDFLAGS = as DFLAGS, for the f90 compiler
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas 

MANUAL_DFLAGS  =
DFLAGS =  -D__INTEL -D__FFTW -D__MPI -D__PARA -D__SCALAPACK 
$(MANUAL_DFLAGS)
FDFLAGS= $(DFLAGS)

# IFLAGS = how to locate directories where files to be included are
# In most cases, IFLAGS = -I../include

IFLAGS = -I../include

# MOD_FLAGS = flag used by f90 compiler to locate modules
# Each Makefile defines the list of needed modules in MODFLAGS

MOD_FLAG  = -I

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90 
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90 = mpiifort
#F90   = ifort
CC = icc
F77= ifort

# C preprocessor and preprocessing flags - for explicit preprocessing, 
# if needed (see the compilation rules above)
# preprocessing