I have set LDFLAGS = -static in the make.sys file, and on running "make all"
I get this error:
----------------------------------------
libiotk.a
mpif90 -O3 -x f95-cpp-input -D__GFORTRAN -D__FFTW -D__USE_INTERNAL_FFTW -D__MPI -D__PARA -I../include -I./ -I../Modules -I../iotk/src -I../PW -I../PH -c iotk_print_kinds.f90
make loclib_only
make[3]: Entering directory `/home3/colonel/espresso-4.0.5/iotk/src'
make[3]: Nothing to be done for `loclib_only'.
make[3]: Leaving directory `/home3/colonel/espresso-4.0.5/iotk/src'
mpif90 -static -o iotk_print_kinds.x iotk_print_kinds.o libiotk.a
/opt/openmpi/lib/libopen-pal.a(dlopen.o): In function `vm_open':
(.text+0x123): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/opt/openmpi/lib/libopen-rte.a(sys_info.o): In function `orte_sys_info':
(.text+0x16f): warning: Using 'getpwuid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/opt/openmpi/lib/libopen-pal.a(if.o): In function `opal_ifaddrtoname':
(.text+0x780): warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `__malloc_check_init':
(.text+0x1060): multiple definition of `__malloc_check_init'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x3a0): first defined here
/usr/bin/ld: Warning: size of symbol `__malloc_check_init' changed from 144 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 105 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `_int_free':
(.text+0x2230): multiple definition of `_int_free'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x1030): first defined here
/usr/bin/ld: Warning: size of symbol `_int_free' changed from 1219 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 2080 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `_int_malloc':
(.text+0x2a50): multiple definition of `_int_malloc'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x18c0): first defined here
/usr/bin/ld: Warning: size of symbol `_int_malloc' changed from 3916 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 3754 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `_int_memalign':
(.text+0x3900): multiple definition of `_int_memalign'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x2d40): first defined here
/usr/bin/ld: Warning: size of symbol `_int_memalign' changed from 577 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 550 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `_int_valloc':
(.text+0x3c30): multiple definition of `_int_valloc'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x2f90): first defined here
/usr/bin/ld: Warning: size of symbol `_int_valloc' changed from 378 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 62 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `_int_realloc':
(.text+0x3d80): multiple definition of `_int_realloc'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x3180): first defined here
/usr/bin/ld: Warning: size of symbol `_int_realloc' changed from 875 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 1411 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `free':
(.text+0x5f20): multiple definition of `free'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x1500): first defined here
/usr/bin/ld: Warning: size of symbol `free' changed from 256 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 454 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `malloc':
(.text+0x4690): multiple definition of `malloc'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x3830): first defined here
/usr/bin/ld: Warning: size of symbol `malloc' changed from 363 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 466 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
/usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o): In function `realloc':
(.text+0x60f0): multiple definition of `realloc'
/opt/openmpi/lib/libopen-pal.a(lt1-malloc.o):(.text+0x3bc0): first defined here
/usr/bin/ld: Warning: size of symbol `realloc' changed from 490 in /opt/openmpi/lib/libopen-pal.a(lt1-malloc.o) to 1184 in /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/../../../../lib64/libc.a(malloc.o)
collect2: ld returned 1 exit status
make[2]: *** [iotk_print_kinds.x] Error 1
make[2]: Leaving directory `/home3/colonel/espresso-4.0.5/iotk/src'
make[1]: *** [lib+util] Error 2
make[1]: Leaving directory `/home3/colonel/espresso-4.0.5/iotk'
make: *** [libiotk] Error 2
---------------------------------------------

sincerely

On Sun, Sep 6, 2009 at 11:35 PM, <pw_forum-request at pwscf.org> wrote:
> Send Pw_forum mailing list submissions to
>         pw_forum at pwscf.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://www.democritos.it/mailman/listinfo/pw_forum
> or, via email, send a message with subject or body 'help' to
>         pw_forum-request at pwscf.org
>
> You can reach the person managing the list at
>         pw_forum-owner at pwscf.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Pw_forum digest..."
> > > Today's Topics: > > 1. Re: pw.x running but nothing happens (Bipul Rakshit) > 2. pseudo potential (Mansoureh Pashangpour) > 3. Re: error loading shared libraries on parallel execution > (Paolo Giannozzi) > 4. Re: pw.x running but nothing happens (Lorenzo Paulatto) > 5. Re: Pw_forum Digest, Vol 27, Issue 23 (sreekar guddeti) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 6 Sep 2009 08:03:29 +0100 > From: Bipul Rakshit <bipulrr at gmail.com> > Subject: Re: [Pw_forum] pw.x running but nothing happens > To: PWSCF Forum <pw_forum at pwscf.org> > Message-ID: > <3a749910909060003k38e7f62fn4aa040900b71b42c at mail.gmail.com> > Content-Type: text/plain; charset="gb2312" > > Dear Wangqj, > The same thing happens to me. > since you are using large no. of wfc, although it shows the job is running > in 8 procs, but sometimes if the installation is not proper, it is running > in 1 procs only. > > So better you check the parallel installation using a small job, with > different no. of procs and see whether its taking lesser time as no. of > procs increases or not? > > cheers > > 2009/9/6 wangqj1 <wangqj1 at 126.com> > > > > > Dear pwscf users > > When I run vc-relax on the computing cluster use one node which has > 8 > > CPUs. > > The output file is as following: > > > > Program PWSCF v.4.0.1 starts ... > > Today is 6Sep2009 at 7:49:30 > > Parallel version (MPI) > > Number of processors in use: 8 > > R & G space division: proc/pool = 8 > > For Norm-Conserving or Ultrasoft (Vanderbilt) Pseudopotentials or > PAW > > ..................................................................... 
> > Initial potential from superposition of free atoms
> > starting charge 435.99565, renormalised to 436.00000
> > Starting wfc are 254 atomic + 8 random wfc
> >
> > After one day ,it still like this and no iteration has completed ,there is
> > also no error was turn up .There is no error in the input file because I
> > have test it on anthoer computer which has 4 CPUs and it runs well .
> > I can't find the reason about this ,any help will be appreciated .
> > Best Regards
> > Q.J.Wang
> > XiangTan University
> >
> > _______________________________________________
> > Pw_forum mailing list
> > Pw_forum at pwscf.org
> > http://www.democritos.it/mailman/listinfo/pw_forum
>
> --
> Dr. Bipul Rakshit
> Research Associate,
> S N Bose Centre for Basic Sciences,
> Salt Lake,
> Kolkata 700 098
> India
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://www.democritos.it/pipermail/pw_forum/attachments/20090906/197c5bd6/attachment-0001.htm
>
> ------------------------------
>
> Message: 2
> Date: Sun, 6 Sep 2009 11:25:06 +0330
> From: Mansoureh Pashangpour <mansourehp at gmail.com>
> Subject: [Pw_forum] pseudo potential
> To: PWSCF Forum <pw_forum at pwscf.org>
> Message-ID:
>         <cbe1626b0909060055o66690789yc0ae02efdfa5391e at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Dear all
> how can I plot these pseudo potentials? how can I discribe the properties
> of pseudo potentials?
> > *Fe.pbe-nd-rrkjus.UPF*< > http://www.pwscf.org/pseudo/1.3/UPF/Fe.pbe-nd-rrkjus.UPF> > ( > *details*< > http://www.pwscf.org/pseudo/upfdetails.php?upf=Fe.pbe-nd-rrkjus.UPF>) > > > Perdew-Burke-Ernzerhof (PBE) exch-corr > nonlinear core-correction > semicore state d in valence > Rabe Rappe Kaxiras Joannopoulos (ultrasoft) > > and > > *H.pbe-rrkjus.UPF* <http://www.pwscf.org/pseudo/1.3/UPF/H.pbe-rrkjus.UPF> > (*details* < > http://www.pwscf.org/pseudo/upfdetails.php?upf=H.pbe-rrkjus.UPF>) > > > Perdew-Burke-Ernzerhof (PBE) exch-corr > Rabe Rappe Kaxiras Joannopoulos (ultrasoft) > > > and > > *O.pbe-rrkjus.UPF* > <http://www.pwscf.org/pseudo/1.3/UPF/O.pbe-rrkjus.UPF> (*details* > <http://www.pwscf.org/pseudo/upfdetails.php?upf=O.pbe-rrkjus.UPF>) > > Perdew-Burke-Ernzerhof (PBE) exch-corr > Rabe Rappe Kaxiras Joannopoulos (ultrasoft) > > > Thanks > Mansoureh Pashangpour > Ph.D student > Islami Azad university > science & reaserch branch > Tehran, IRAN > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://www.democritos.it/pipermail/pw_forum/attachments/20090906/3efcd6f0/attachment-0001.htm > > ------------------------------ > > Message: 3 > Date: Sun, 6 Sep 2009 10:24:48 +0200 > From: Paolo Giannozzi <giannozz at democritos.it> > Subject: Re: [Pw_forum] error loading shared libraries on parallel > execution > To: PWSCF Forum <pw_forum at pwscf.org> > Message-ID: <E6918F77-0C97-4F26-9F7E-BE790E9EFC6B at democritos.it> > Content-Type: text/plain; charset=US-ASCII; format=flowed > > > On Sep 6, 2009, at 1:20 , sreekar guddeti wrote: > > > plz suggest solutions which donot require root permissions , > > as i dont have > > somebody must have it. Report the problem and the > solution (i.e. install gfortran on ALL processors) to > whoever has root access. As an alternative, try static > link (add -static to LDFLAGS in make.sys). 
> --- > Paolo Giannozzi, Dept of Physics, University of Udine > via delle Scienze 208, 33100 Udine, Italy > Phone +39-0432-558216, fax +39-0432-558222 > > > > > > ------------------------------ > > Message: 4 > Date: Sun, 6 Sep 2009 12:20:40 +0200 (CEST) > From: "Lorenzo Paulatto" <paulatto at sissa.it> > Subject: Re: [Pw_forum] pw.x running but nothing happens > To: "PWSCF Forum" <pw_forum at pwscf.org> > Message-ID: <46134.78.12.159.112.1252232440.squirrel at webmail.sissa.it> > Content-Type: text/plain;charset=iso-8859-1 > > > On Sun, September 6, 2009 02:33, wangqj1 wrote: > > After one day ,it still like this and no iteration has completed ,there > > is > > also no error was turn up .There is no error in the input file because I > > have test it on anthoer computer which has 4 CPUs and it runs well . > > I can't find the reason about this ,any help will be appreciated . > > > Dear QJ, > this is strange. But we would need more information on your hardwre > configuration in order to help you. In the mean while you can check the > behaviour of te pw.x processes with "top". E.g. if they are all runing at > 100% CPU, how much memory they are taking and so on. 
> > regards
>
> --
> Lorenzo Paulatto
> SISSA & DEMOCRITOS (Trieste)
> phone: +39 040 3787 511
> skype: paulatz
> www: http://people.sissa.it/~paulatto/
>
> ----------------------------------------------------------------
> SISSA Webmail https://webmail.sissa.it/
> Powered by SquirrelMail http://www.squirrelmail.org/
>
> ------------------------------
>
> Message: 5
> Date: Sun, 6 Sep 2009 23:35:26 +0530
> From: sreekar guddeti <colonel.sreekar at gmail.com>
> Subject: Re: [Pw_forum] Pw_forum Digest, Vol 27, Issue 23
> To: pw_forum at pwscf.org
> Message-ID:
>         <c864e4460909061105i43b1885dh6141b2789b8ae51 at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> @duy le
> i inserted the line #$ -V in my qsub script and the env variable
> $LD_LIBRARY_PATH is being set from script... thanks for that but still
> problem persists.
>
> @rakshit
> -----------------------------
> $find /usr/lib -name libgfortran*
> -----------------------------
> and output is
> _______________________
> /usr/lib/libgfortran.so.1.0.0
> /usr/lib/libgfortran.so.1
> /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/libgfortran.a
> /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/libgfortranbegin.a
> /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/libgfortran.so
> /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/32/libgfortran.a
> /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/32/libgfortranbegin.a
> /usr/lib/gcc/x86_64-redhat-linux6E/4.3.2/32/libgfortran.so
> _______________________
> whereas on doing
> -----------------------------
> $find /usr/lib64 -name libgfortran*
> -----------------------------
> output is
> _________________
> /usr/lib64/libgfortran.so.3.0.0
> /usr/lib64/libgfortran.so.1.0.0
> /usr/lib64/libgfortran.so.1
> find: /usr/lib64/audit: Permission denied
> */usr/lib64/libgfortran.so.3*
> _________________
>
> it means my OS has the required library, i guess
>
> i installed the QE on the head node
> This cluster is a Rocks cluster with
>
> # of nodes:
> 10 (1 head node + 9 compute nodes)
> # of processors/node: 8
> # Total # of processors: 10X8 = 80
>
> i tested the sample program for submitting batch jobs using SGE utility and
> it is working fine (
> http://www.rocksclusters.org/roll-documentation/sge/5.2/submitting-batch-jobs.html
> )
>
> sincerely,
> sreekar guddeti
> Dept. Physics
> IIT Bombay
> India
>
> On Sun, Sep 6, 2009 at 12:28 PM, <pw_forum-request at pwscf.org> wrote:
> >
> > Today's Topics:
> >
> >   1. Re: error loading shared libraries on parallel execution (Duy Le)
> >   2. pw.x running but nothing happens (wangqj1)
> >   3. Re: error loading shared libraries on parallel execution
> >      (Bipul Rakshit)
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Sat, 5 Sep 2009 19:39:17 -0400
> > From: Duy Le <ttduyle at gmail.com>
> > Subject: Re: [Pw_forum] error loading shared libraries on parallel
> >         execution
> > To: PWSCF Forum <pw_forum at pwscf.org>
> > Message-ID:
> >         <8974d3b20909051639u575aed19xdf474c53f9a7d877 at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hi,I am not sure if this help. Could you please try to add
> > #$ -V in your submitting script. Like this:
> >
> > #!/bin/bash
> > #
> > #$ -V
> > #$ -cwd
> > #$ -j y
> > #$ -S /bin/bash
> > #
> >
> > Good luck.
> > D.
> > > > On Sat, Sep 5, 2009 at 7:20 PM, sreekar guddeti > > <colonel.sreekar at gmail.com>wrote: > > > > > i know this issuehas been addressed and documented in troubleshooting > > > section of the users guide. > > > but i giveup in despair trying for a whole day to figure this problem > > > i run my jobs on rocks cluster by using SGE's facility of submitting > > batch > > > jobs > > > > > > > > > http://www.rocksclusters.org/roll-documentation/sge/5.2/submitting-batch-jobs.html > > > > > > what i 'apparently' observe(or doubtfully infer) is that i can > > successfully > > > run a single parallel job, but on submitting a second job i get the > error > > > ____________________________________________ > > > /home3/colonel/espresso-4.0.5/bin/pw.x: error while loading shared > > > libraries: libgfortran.so.3: cannot open shared object file: No such > > > file or directory > > > ____________________________________________ > > > > > > > > > i find out the path for the library and added to the LD_LIBRARY_PATH by > > > writing > > > _______________________________________ > > > #set the library path to include gfortran libraries > > > export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64:/usr/lib > > > > > > > > > $ECHO > > > $ECHO "$LD_LIBRARY_PATH" > > > $ECHO > > > _______________________________________ > > > > > > in the file which acts as script for qsub ($qsub -pe orte 4 > > > dosroutine.qsub) which is > > > > > > dosroutine.qsub > > > ----------------------------------------------------------------- > > > > > > #!/bin/bash > > > # > > > #$ -cwd > > > #$ -j y > > > #$ -S /bin/bash > > > # > > > > > > #extract the info about no of processors involved from command line > > > arguments of 'qsub' > > > PROCESSORS=$NSLOTS > > > > > > #heuristically assign the no of processors per pool NPR > > > NPR=4 > > > #as a result no of pools are give by > > > NPK=`expr $PROCESSORS / $NPR` > > > > > > #!/bin/bash > > > # > > > # > > > #Script for performing a dos calculation on a 
parallel processor > > > WORKINGDIR=`pwd` > > > ECHO="echo" > > > > > > #set the library path to include gfortran libraries > > > export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64:/usr/lib > > > > > > > > > $ECHO > > > $ECHO "$LD_LIBRARY_PATH" > > > $ECHO > > > > > > # set the needed environment variables > > > > > > PREFIX=`cd /home3/colonel/espresso-4.0.5 ; pwd` > > > $ECHO $PREFIX > > > BIN_DIR=$PREFIX/bin > > > PSEUDO_DIR=$PREFIX/pseudo > > > TMP_DIR=$HOME/tmp > > > PARA_PREFIX="/opt/openmpi/bin/mpirun -np $PROCESSORS" > > > PARA_POSTFIX="-npool $NPK" > > > > > > # required executables and pseudopotentials > > > $ECHO > > > $ECHO " executables directory: $BIN_DIR" > > > $ECHO " pseudo directory: $PSEUDO_DIR" > > > $ECHO " temporary directory: $TMP_DIR" > > > > > > #create results directory > > > for DIR in "$TMP_DIR" "$WORKINGDIR/results" ; do > > > if test ! -d $DIR ; then > > > mkdir $DIR > > > fi > > > done > > > cd $WORKINGDIR/results > > > > > > > > > # variables to represent programs > > > PW_COMMAND="$PARA_PREFIX $BIN_DIR/pw.x $PARA_POSTFIX" > > > DOS_COMMAND="$PARA_PREFIX $BIN_DIR/dos.x $PARA_POSTFIX" > > > PROJWFC_COMMAND="$PARA_PREFIX $BIN_DIR/projwfc.x $PARA_POSTFIX" > > > > > > > > > # DOS calculation for 0Ni0 > > > cat > 0ni0.dos.in << EOF > > > &control > > > calculation='nscf' > > > restart_mode='from_scratch', > > > prefix='0ni0', > > > pseudo_dir = '$PSEUDO_DIR/', > > > outdir='$TMP_DIR/' > > > / > > > &system > > > ibrav=2, celldm(1) =6.48, nat=1, ntyp=1, > > > nspin = 2, starting_magnetization(1)=0.7, > > > ecutwfc = 24.0, ecutrho = 288.0, nbnd=8, > > > occupations='tetrahedra' > > > / > > > &electrons > > > conv_thr = 1.0e-10 > > > mixing_beta = 0.7 > > > / > > > ATOMIC_SPECIES > > > Ni 58.69 NiUS.RRKJ3.UPF > > > ATOMIC_POSITIONS > > > Ni 0.0 0.0 0.0 > > > K_POINTS {automatic} !special points generated by tetrahedra > > method > > > 12 12 12 0 0 0 > > > EOF > > > > > > $ECHO " running DOS calculation for 0Ni0 ...\c" > > > $PW_COMMAND < 
0ni0.dos.in > 0ni0.dos.out > > > $ECHO > > > $ECHO " done" > > > > > > ------------------------------------------------------------- > > > > > > the output i get is > > > > > > ************************************************ > > > :/usr/lib64:/usr/lib > > > > > > /home3/colonel/espresso-4.0.5 > > > > > > executables directory: /home3/colonel/espresso-4.0.5/bin > > > pseudo directory: /home3/colonel/espresso-4.0.5/pseudo > > > temporary directory: /home3/colonel/tmp > > > running DOS calculation for 0Ni0 ...\c > > > /home3/colonel/espresso-4.0.5/bin/pw.x: error while loading shared > > > libraries: libgfortran.so.3: cannot open shared object file: No such > > > file or directory > > > /home3/colonel/espresso-4.0.5/bin/pw.x: error while loading shared > > > libraries: libgfortran.so.3: cannot open shared object file: No such > > > file or directory > > > /home3/colonel/espresso-4.0.5/bin/pw.x: error while loading shared > > > libraries: libgfortran.so.3: cannot open shared object file: No such > > > file or directory > > > /home3/colonel/espresso-4.0.5/bin/pw.x: error while loading shared > > > libraries: libgfortran.so.3: cannot open shared object file: No such > > > file or directory > > > > > > done > > > ************************************************* > > > plz suggest solutions which donot require root permissions , as i dont > > have > > > thanks in advance > > > -- > > > Sreekar Guddeti > > > Department of Physics > > > > > > _______________________________________________ > > > Pw_forum mailing list > > > Pw_forum at pwscf.org > > > http://www.democritos.it/mailman/listinfo/pw_forum > > > > > > > > > > > > -- > > -------------------------------------------------- > > Duy Le > > PhD Student > > Department of Physics > > University of Central Florida. > > -------------- next part -------------- > > An HTML attachment was scrubbed... 
> > URL:
> > http://www.democritos.it/pipermail/pw_forum/attachments/20090905/010fb1e3/attachment-0001.htm
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Sun, 6 Sep 2009 08:33:25 +0800 (CST)
> > From: wangqj1 <wangqj1 at 126.com>
> > Subject: [Pw_forum] pw.x running but nothing happens
> > To: pw_forum <pw_forum at pwscf.org>
> > Message-ID:
> >         <25470012.218281252197205181.JavaMail.coremail at bj126app52.126.com>
> > Content-Type: text/plain; charset="gbk"
> >
> > [...]
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Sun, 6 Sep 2009 07:58:49 +0100
> > From: Bipul Rakshit <bipulrr at gmail.com>
> > Subject: Re: [Pw_forum] error loading shared libraries on parallel
> >         execution
> > To: PWSCF Forum <pw_forum at pwscf.org>
> > Message-ID: <3a749910909052358m6f23c3sf5bdbba1b595d8d5 at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > hi,
> > Just from root user type
> > yum install libgfortran.so.3
> >
> > then it will install this files which is not present in your machine
> >
> > On Sun, Sep 6, 2009 at 12:20 AM, sreekar guddeti
> > <colonel.sreekar at gmail.com> wrote:
> > > [...]
> >
> > --
> > Dr. Bipul Rakshit
> > Research Associate,
> > S N Bose Centre for Basic Sciences,
> > Salt Lake,
> > Kolkata 700 098
> > India
> >
> > ------------------------------
> >
> > _______________________________________________
> > Pw_forum mailing list
> > Pw_forum at pwscf.org
> > http://www.democritos.it/mailman/listinfo/pw_forum
> >
> > End of Pw_forum Digest, Vol 27, Issue 23
> > ****************************************
>
> --
> Sreekar Guddeti
>
> ------------------------------
>
> _______________________________________________
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum
>
> End of Pw_forum Digest, Vol 27, Issue 24
> ****************************************

--
Sreekar Guddeti
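For readers who land on this thread with the same two failures, here is a hedged sketch of possible ways out. Neither appears in the thread itself, and every path and flag below is an assumption to be checked on your own cluster before use:

```shell
#!/bin/bash
# Sketch only -- paths and flags are assumptions, verify them locally.

# (a) Static-link route: the "multiple definition of `malloc'" errors above
# come from OpenMPI's bundled allocator (lt1-malloc.o in libopen-pal.a)
# clashing with the static glibc. GNU ld can be told to keep the first
# definition it sees; in make.sys that would look roughly like:
#     LDFLAGS = -static -Wl,--allow-multiple-definition
# Rebuilding OpenMPI with ./configure --without-memory-manager avoids the
# clash at the source and is the cleaner fix.

# (b) Dynamic-link route, no root needed: if $HOME is NFS-mounted on the
# compute nodes (usual on a Rocks cluster), copy libgfortran.so.3 there
# once from the head node and prepend that directory in dosroutine.qsub:
mkdir -p "$HOME/lib"
# cp /usr/lib64/libgfortran.so.3 "$HOME/lib/"   # run once on the head node
export LD_LIBRARY_PATH="$HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Prepending (rather than appending) the private directory matters: the loader searches LD_LIBRARY_PATH left to right, so the copied library is found even on nodes where /usr/lib64 lacks it.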