Re: [OMPI users] Configuration with Intel C++ Composer 12.0.2 on OSX 10.7.5

2013-05-16 Thread Tim Prince

On 5/16/2013 2:16 PM, Geraldine Hochman-Klarenberg wrote:
Maybe I should add that my Intel C++ and Fortran compilers are 
different versions. C++ is 12.0.2 and Fortran is 13.0.2. Could that be 
an issue? Also, when I check for the location of ifort, it seems to be 
in usr/bin - which is different than the C compiler (even though I 
have folders /opt/intel/composer_xe_2013 and 
/opt/intel/composer_xe_2013.3.171 etc.). And I have tried source 
/opt/intel/bin/ifortvars.sh intel64 too.


Geraldine


On May 16, 2013, at 11:57 AM, Geraldine Hochman-Klarenberg wrote:



I am having trouble configuring OpenMPI-1.6.4 with the Intel C/C++ 
composer (12.0.2). My OS is OSX 10.7.5.


I am not a computer whizz so I hope I can explain what I did properly:

1) In bash, I did source /opt/intel/bin/compilervars.sh intel64
and then echo PATH showed:
/opt/intel/composerxe-2011.2.142/bin/intel64:/opt/intel/composerxe-2011.2.142/mpirt/bin/intel64:/opt/intel/composerxe-2011.2.142/bin:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/Library/Frameworks/Python.framework/Versions/Current/bin:.:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

2) which icc and which icpc showed:
/opt/intel/composerxe-2011.2.142/bin/intel64/icc
and
/opt/intel/composerxe-2011.2.142/bin/intel64/icpc

So that all seems okay to me. Still when I do
./configure CC=icc CXX=icpc F77=ifort FC=ifort 
--prefix=/opt/openmpi-1.6.4

from the folder in which the extracted OpenMPI files sit, I get

== Configuring Open MPI

*** Startup tests
checking build system type... x86_64-apple-darwin11.4.2
checking host system type... x86_64-apple-darwin11.4.2
checking target system type... x86_64-apple-darwin11.4.2
checking for gcc... icc
checking whether the C compiler works... no
configure: error: in 
`/Users/geraldinehochman-klarenberg/Projects/openmpi-1.6.4':
configure: error: C compiler cannot create executables
See `config.log' for more details


You do need to examine config.log and show it to us if you don't 
understand it.
Attempting to use the older C compiler and libraries to link  .o files 
made by the newer Fortran is likely to fail.
If you wish to attempt this, assuming the Intel compilers are installed 
in default directories, I would suggest you source the environment 
setting for the older compiler, then the newer one, so that the newer 
libraries will be found first and the older ones used only when they 
aren't duplicated by the newer ones.

You also need the 64-bit g++ active.

--
Tim Prince
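
As an illustration of the sourcing order suggested above (a sketch only; the
compilervars.sh paths are assumed from default Composer XE installs and will
likely differ on a given system):

# source the older (12.x) environment first, then the newer (13.x) one,
# so that the newer bin/ and lib/ directories land earlier in the search paths
source /opt/intel/composerxe-2011.2.142/bin/compilervars.sh intel64
source /opt/intel/composer_xe_2013.3.171/bin/compilervars.sh intel64

# confirm what actually gets picked up afterwards
which icc icpc ifort
icc --version
ifort --version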



Re: [OMPI users] Configuration with Intel C++ Composer 12.0.2 on OSX 10.7.5

2013-05-16 Thread Ralph Castain
FWIW: my Mac is running 10.8.3 and works fine - though the xcode reqt is quite 
true.


On May 16, 2013, at 4:14 PM, Gus Correa  wrote:

> Hi Geraldine
> 
> I haven't had much luck with OpenMPI 1.6.4 on a Mac OS X.
> OMPI 1.6.4 built with gcc (no Fortran), but it would have
> memory problems at runtime.
> However, my Mac is much older than yours (OS X 10.6.8) and 32 bit,
> not a good comparison.
> In any case, take my suggestions with a grain of salt.
> 
> 1) I remember that you need to install X-code beforehand,
> to have the right Mac development environment, header files, etc.
> You can get X-code from Apple.
> Did you install it?
> 
> 2) With X-code installed, try to rebuild OMPI from scratch.
> Do a "make distclean" at least,
> or maybe untar the OMPI tarball again and start fresh.
> 
> 3) There is some information that you can send to the list,
> which may help the OMPI developers help you.
> The config.log at least.
> Check this FAQ:
> http://www.open-mpi.org/community/help/
> 
> 4) If using the Intel compilers, I would try to keep the same
> release/version on all of them, not mix 13.X.Y with 12.W.Z.
> However, the error message you sent seems to have happened
> very early during the configure step, and the
> compiler version mix is probably not the reason.
> 
> I hope this helps,
> Gus Correa
> 
> 
> On 05/16/2013 02:16 PM, Geraldine Hochman-Klarenberg wrote:
>> Maybe I should add that my Intel C++ and Fortran compilers are different
>> versions. C++ is 12.0.2 and Fortran is 13.0.2. Could that be an issue?
>> Also, when I check for the location of ifort, it seems to be in usr/bin
>> - which is different than the C compiler (even though I have folders
>> /opt/intel/composer_xe_2013 and /opt/intel/composer_xe_2013.3.171 etc.).
>> And I have tried source /opt/intel/bin/ifortvars.sh intel64 too.
>> 
>> Geraldine
>> 
>> 
>> On May 16, 2013, at 11:57 AM, Geraldine Hochman-Klarenberg wrote:
>> 
>>> 
>>> I am having trouble configuring OpenMPI-1.6.4 with the Intel C/C++
>>> composer (12.0.2). My OS is OSX 10.7.5.
>>> 
>>> I am not a computer whizz so I hope I can explain what I did properly:
>>> 
>>> 1) In bash, I did source /opt/intel/bin/compilervars.sh intel64
>>> and then echo PATH showed:
>>> /opt/intel/composerxe-2011.2.142/bin/intel64:/opt/intel/composerxe-2011.2.142/mpirt/bin/intel64:/opt/intel/composerxe-2011.2.142/bin:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/Library/Frameworks/Python.framework/Versions/Current/bin:.:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
>>> 
>>> 2) which icc and which icpc showed:
>>> /opt/intel/composerxe-2011.2.142/bin/intel64/icc
>>> and
>>> /opt/intel/composerxe-2011.2.142/bin/intel64/icpc
>>> 
>>> So that all seems okay to me. Still when I do
>>> ./configure CC=icc CXX=icpc F77=ifort FC=ifort
>>> --prefix=/opt/openmpi-1.6.4
>>> from the folder in which the extracted OpenMPI files sit, I get
>>> 
>>> 
>>> == Configuring Open MPI
>>> 
>>> *** Startup tests
>>> checking build system type... x86_64-apple-darwin11.4.2
>>> checking host system type... x86_64-apple-darwin11.4.2
>>> checking target system type... x86_64-apple-darwin11.4.2
>>> checking for gcc... icc
>>> checking whether the C compiler works... no
>>> configure: error: in
>>> `/Users/geraldinehochman-klarenberg/Projects/openmpi-1.6.4':
>>> configure: error: C compiler cannot create executables
>>> See `config.log' for more details
>>> 
>>> I'd really appreciate any pointers on how to solve this, because I'm
>>> running out of ideas (and so, it seems, is Google).
>>> 
>>> Thanks!
>>> Geraldine
>>> ___
>>> users mailing list
>>> us...@open-mpi.org 
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Configuration with Intel C++ Composer 12.0.2 on OSX 10.7.5

2013-05-16 Thread Gus Correa

Hi Geraldine

I haven't had much luck with OpenMPI 1.6.4 on a Mac OS X.
OMPI 1.6.4 built with gcc (no Fortran), but it would have
memory problems at runtime.
However, my Mac is much older than yours (OS X 10.6.8) and 32 bit,
not a good comparison.
In any case, take my suggestions with a grain of salt.

1) I remember that you need to install X-code beforehand,
to have the right Mac development environment, header files, etc.
You can get X-code from Apple.
Did you install it?

2) With X-code installed, try to rebuild OMPI from scratch.
Do a "make distclean" at least,
or maybe untar the OMPI tarball again and start fresh.

3) There is some information that you can send to the list,
which may help the OMPI developers help you.
The config.log at least.
Check this FAQ:
http://www.open-mpi.org/community/help/

4) If using the Intel compilers, I would try to keep the same
release/version on all of them, not mix 13.X.Y with 12.W.Z.
However, the error message you sent seems to have happened
very early during the configure step, and the
compiler version mix is probably not the reason.
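
As a concrete sketch of steps 2 and 3 above (the directory names are assumed;
adjust them to wherever the tarball was actually extracted):

# start from a clean tree, or re-extract the tarball and start fresh
cd ~/Projects/openmpi-1.6.4
make distclean

# re-run configure, keeping a copy of its output
./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-1.6.4 2>&1 | tee configure.out

# config.log (plus the other items listed at http://www.open-mpi.org/community/help/)
# is what the list will want to see; compress it before attaching
bzip2 -c config.log > config.log.bz2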

I hope this helps,
Gus Correa


On 05/16/2013 02:16 PM, Geraldine Hochman-Klarenberg wrote:

Maybe I should add that my Intel C++ and Fortran compilers are different
versions. C++ is 12.0.2 and Fortran is 13.0.2. Could that be an issue?
Also, when I check for the location of ifort, it seems to be in usr/bin
- which is different than the C compiler (even though I have folders
/opt/intel/composer_xe_2013 and /opt/intel/composer_xe_2013.3.171 etc.).
And I have tried source /opt/intel/bin/ifortvars.sh intel64 too.

Geraldine


On May 16, 2013, at 11:57 AM, Geraldine Hochman-Klarenberg wrote:



I am having trouble configuring OpenMPI-1.6.4 with the Intel C/C++
composer (12.0.2). My OS is OSX 10.7.5.

I am not a computer whizz so I hope I can explain what I did properly:

1) In bash, I did source /opt/intel/bin/compilervars.sh intel64
and then echo PATH showed:
/opt/intel/composerxe-2011.2.142/bin/intel64:/opt/intel/composerxe-2011.2.142/mpirt/bin/intel64:/opt/intel/composerxe-2011.2.142/bin:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/Library/Frameworks/Python.framework/Versions/Current/bin:.:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

2) which icc and which icpc showed:
/opt/intel/composerxe-2011.2.142/bin/intel64/icc
and
/opt/intel/composerxe-2011.2.142/bin/intel64/icpc

So that all seems okay to me. Still when I do
./configure CC=icc CXX=icpc F77=ifort FC=ifort
--prefix=/opt/openmpi-1.6.4
from the folder in which the extracted OpenMPI files sit, I get

== Configuring Open MPI

*** Startup tests
checking build system type... x86_64-apple-darwin11.4.2
checking host system type... x86_64-apple-darwin11.4.2
checking target system type... x86_64-apple-darwin11.4.2
checking for gcc... icc
checking whether the C compiler works... no
configure: error: in
`/Users/geraldinehochman-klarenberg/Projects/openmpi-1.6.4':
configure: error: C compiler cannot create executables
See `config.log' for more details
I'd really appreciate any pointers on how to solve this, because I'm
running out of ideas (and so, it seems, is Google).

Thanks!
Geraldine
___
users mailing list
us...@open-mpi.org 
http://www.open-mpi.org/mailman/listinfo.cgi/users




___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Subject: Building openmpi-1.6.4 with 64-bit integers

2013-05-16 Thread H Hogreve
Dear Jeff:

Many thanks for your kind response. Since, before changing to
version 1.6.4, I had installed (via "apt-get install") a 1.5 version
of openmpi, I was also suspicious that ompi_info was referring
to remnants of this old mpi version, though I did my best to
remove it. Nonetheless, when checking again, there actually
still was an ompi_info in /usr/bin from this previous installation.
Upon removing these old ompi binaries from /usr/bin, ompi_info
now indeed yields the desired result: Fort integer size: 8

Thanks again and best wishes, Hans H.
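
For anyone hitting the same problem, a quick way to spot a stale ompi_info is
something along these lines (a sketch only; the Ubuntu package names are
assumptions):

# list every ompi_info the shell can see, in PATH order
type -a ompi_info

# if an old /usr/bin/ompi_info is still present, see which package owns it ...
dpkg -S /usr/bin/ompi_info
# ... and remove the old packages rather than deleting files by hand
sudo apt-get remove openmpi-bin libopenmpi-dev

# then query the freshly installed copy explicitly
/opt/openmpi/bin/ompi_info -a | grep 'Fort integer size'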

- Original Message -
From: "Jeff Squyres (jsquyres)" 
To: "Open MPI Users" 
Sent: Thursday, May 16, 2013 3:33 PM
Subject: Re: [OMPI users] Subject: Building openmpi-1.6.4 with 64-bit
integers


> Looking at your config.log, it looks like OMPI correctly determined that
the Fortran INTEGER size is 8.  I see statements like this:
>
> #define OMPI_SIZEOF_FORTRAN_INTEGER 8
>
> Are you sure that you're running the ompi_info that you just installed?
Can you double check to see that there's not another ompi_info somewhere in
your path that you're accidentally executing?
>
>
> On May 15, 2013, at 9:58 PM, H Hogreve 
>  wrote:
>
> > Dear Jeff:
> >
> > Attached please find the compressed  Config.log  file;
> > perhaps it already might provide some indications for
> > the problem encountered. There are several entries
> > "compilation aborted for conftest.c", but I don't know
> > whether this is of importance.
> > Many thanks and best wishes, Hans H.
> >
> > - Original Message -
> > From: "Jeff Squyres (jsquyres)" 
> > To: "Open MPI Users" 
> > Sent: Thursday, May 16, 2013 2:49 AM
> > Subject: Re: [OMPI users] Subject: Building openmpi-1.6.4 with 64-bit
> > integers
> >
> >
> >> Can you send all the information listed here:
> >>
> >>http://www.open-mpi.org/community/help/
> >>
> >>
> >> On May 15, 2013, at 7:42 PM, H Hogreve 
> >> wrote:
> >>
> >>>
> >>> Dear mpi team / users:
> >>>
> >>> To get a mpi with 64-bit integers (linux system:
> >>> ubuntu 12.04) I invoked the following
> >>> configuration options:
> >>>
> >>> ./configure --prefix=/opt/openmpi CXX=icpc CC=icc F77=ifort FC=ifort
> >>> FFLAGS=-i8 FCFLAGS=-i8
> >>>
> >>> The subsequent make/install scripts apparently
> >>> went through smoothly, but when I check
> >>>
> >>> ompi_info -a | grep 'Fort integer size'
> >>>
> >>> the result reads:
> >>>
> >>> Fort integer size: 4
> >>>
> >>> What went awry?
> >>> For all hints and suggestions many thanks in advance,
> >>> Hans H.
> >>>
> >>> ___



Re: [OMPI users] Configuration with Intel C++ Composer 12.0.2 on OSX 10.7.5

2013-05-16 Thread Geraldine Hochman-Klarenberg
Maybe I should add that my Intel C++ and Fortran compilers are different 
versions. C++ is 12.0.2 and Fortran is 13.0.2. Could that be an issue? Also, 
when I check for the location of ifort, it seems to be in usr/bin - which is 
different than the C compiler (even though I have folders 
/opt/intel/composer_xe_2013 and /opt/intel/composer_xe_2013.3.171 etc.). And I 
have tried source /opt/intel/bin/ifortvars.sh intel64 too.

Geraldine


On May 16, 2013, at 11:57 AM, Geraldine Hochman-Klarenberg wrote:

> 
> I am having trouble configuring OpenMPI-1.6.4 with the Intel C/C++ composer 
> (12.0.2). My OS is OSX 10.7.5.
> 
> I am not a computer whizz so I hope I can explain what I did properly:
> 
> 1) In bash, I did source /opt/intel/bin/compilervars.sh intel64 
> and then echo PATH showed: 
> /opt/intel/composerxe-2011.2.142/bin/intel64:/opt/intel/composerxe-2011.2.142/mpirt/bin/intel64:/opt/intel/composerxe-2011.2.142/bin:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/Library/Frameworks/Python.framework/Versions/Current/bin:.:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
> 
> 2) which icc and which icpc showed:
> /opt/intel/composerxe-2011.2.142/bin/intel64/icc
> and
> /opt/intel/composerxe-2011.2.142/bin/intel64/icpc
> 
> So that all seems okay to me. Still when I do
> ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-1.6.4
> from the folder in which the extracted OpenMPI files sit, I get
> 
> 
> == Configuring Open MPI
> 
> 
> *** Startup tests
> checking build system type... x86_64-apple-darwin11.4.2
> checking host system type... x86_64-apple-darwin11.4.2
> checking target system type... x86_64-apple-darwin11.4.2
> checking for gcc... icc
> checking whether the C compiler works... no
> configure: error: in 
> `/Users/geraldinehochman-klarenberg/Projects/openmpi-1.6.4':
> configure: error: C compiler cannot create executables
> See `config.log' for more details
> 
> I'd really appreciate any pointers on how to solve this, because I'm running 
> out of ideas (and so, it seems, is Google).
> 
> Thanks!
> Geraldine
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



Re: [OMPI users] distributed file system

2013-05-16 Thread Jeff Squyres (jsquyres)
On May 16, 2013, at 12:30 PM, Ralph Castain  wrote:

>> "... when the time comes to patch or otherwise upgrade LAM, ..."
> 
> lol - fixed. thx!

That is absolutely hilarious -- it's been there for *years*...

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] plm:tm: failed to spawn daemon, error code = 17000 Error when running jobs on 600 or more nodes

2013-05-16 Thread Gus Correa

Hi Qamar

I don't have a cluster as large as yours,
but I know Torque requires special settings for large
clusters:

http://www.clusterresources.com/torquedocs21/a.flargeclusters.shtml

My tm_.h (Torque 2.4.11) says:

#define TM_ESYSTEM  17000
#define TM_ENOEVENT  17001
#define TM_ENOTCONNECTED 17002

and TM_ESYSTEM may be sent back by pbs_mom (see mom_comm.c)
if it cannot start the user process.

Have you tried to launch a simple "hostname" command with pbsdsh
on >600 nodes?

Diskless/stateless nodes, if you have them, may present another
challenge (say, regarding /tmp):
http://www.supercluster.org/pipermail/torqueusers/2011-March/012453.html
http://www.open-mpi.org/faq/?category=all#poor-sm-btl-performance
http://www.open-mpi.org/faq/?category=all#network-vs-local
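
If useful, a sketch of that kind of pbsdsh sanity check, submitted as its own
Torque job (node count and walltime below are placeholders):

#!/bin/bash
#PBS -N pbsdsh_test
#PBS -l nodes=600:ppn=16
#PBS -l walltime=00:10:00
# run plain hostname on every allocated slot through the TM interface,
# with no Open MPI involved, to check whether Torque itself can spawn that widely
pbsdsh hostname | sort | uniq -c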

I hope this helps,
Gus Correa

On 05/16/2013 12:21 PM, Ralph Castain wrote:

Check the Torque error constants - I'm not sure what that value means,
but Torque is reporting the error. All we do is print out the value they
return if it is an error.


On May 16, 2013, at 9:09 AM, Qamar Nazir wrote:


Dear Support,

We are having an issue with our OMPI runs. When we run jobs on <=550
machines (550 x 16 cores) then they work without any problem. As soon
as we run them on 600 or more machines we get the "plm:tm: failed to
spawn daemon, error code = 17000" Error

We are using:

OpenMPI ver: 1.6.4 (Compiled with GCC v4.4.6)
Torque ver: 2.5.12

The ompi_info's output is attached.


The environment stats have been pasted below.


Please assist.


env envsubst
[ocfacc@cyan01 fullrun]$ env
MODULE_VERSION_STACK=3.2.10
OMPI_MCA_mtl=^psm
MANPATH=/local/software/openmpi/1.6.4/gcc/share/man:/local/software/moab/6.1.10/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man:/local/Modules/default/share/man
HOSTNAME=cyan01
SHELL=/bin/bash
TERM=xterm
HISTSIZE=1000
QTDIR=/usr/lib64/qt-3.3
OLDPWD=/home/ocfacc/hpl/fullrun/results
QTINC=/usr/lib64/qt-3.3/include
LC_ALL=POSIX
USER=ocfacc
LD_LIBRARY_PATH=/local/software/openmpi/1.6.4/gcc/lib:/local/software/torque/default/lib
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
MPIROOT=/local/software/openmpi/1.6.4/gcc
MODULE_VERSION=3.2.10
MAIL=/var/spool/mail/ocfacc
PATH=/local/software/openmpi/1.6.4/gcc/bin:/usr/lib64/qt-3.3/bin:/local/software/moab/6.1.10/sbin:/local/software/moab/6.1.10/bin:/local/software/torque/default/sbin:/local/software/torque/default/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lpp/mmfs/bin:/home/ocfacc/bin:/local/bin:.
PWD=/home/ocfacc/hpl/fullrun
_LMFILES_=/local/Modules/3.2.10/modulefiles/schedulers/torque/2.5.12:/local/Modules/3.2.10/modulefiles/schedulers/moab/6.1.10:/local/Modules/3.2.10/modulefiles/misc/null:/local/Modules/3.2.10/modulefiles/mpi/openmpi/1.6.4/gcc
LANG=en_US.UTF-8
KDE_IS_PRELINKED=1
MOABHOMEDIR=/local/moab/6.1.10
MODULEPATH=/local/Modules/versions:/local/Modules/modulefiles:/local/Modules/3.2.10/modulefiles/misc:/local/Modules/3.2.10/modulefiles/mpi:/local/Modules/3.2.10/modulefiles/libs:/local/Modules/3.2.10/modulefiles/compilers:/local/Modules/3.2.10/modulefiles/apps:/local/Modules/3.2.10/modulefiles/schedulers
LOADEDMODULES=torque/2.5.12:moab/6.1.10:null:openmpi/1.6.4/gcc
KDEDIRS=/usr
PBS_SERVER=blue101,blue102
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/ocfacc
LOGNAME=ocfacc
QTLIB=/usr/lib64/qt-3.3/lib
CVS_RSH=ssh
LC_CTYPE=POSIX
MODULESHOME=/local/Modules/3.2.10
LESSOPEN=|/usr/bin/lesspipe.sh %s
G_BROKEN_FILENAMES=1
module=() { eval `/local/Modules/$MODULE_VERSION/bin/modulecmd bash $*`
}
_=/bin/env









--

Best Regards,

Qamar Nazir
HPC Software Engineer
OCF plc

Tel: 0114 257 2200

Re: [OMPI users] distributed file system

2013-05-16 Thread Ralph Castain

On May 16, 2013, at 9:18 AM, Gus Correa  wrote:

> With the minor caveat that a sentence in the link below
> still points to ol' LAM.  :)
> 
> "... when the time comes to patch or otherwise upgrade LAM, ..."

lol - fixed. thx!

> 
> If you have a NFS shared file system,
> if the architecture and OS are the same across the nodes,
> if your cluster isn't too big (for NFS latency to have an impact),
> it is much easier to install Open MPI on the NFS share
> than to install it on all nodes.
> One installation only to take care of.
> Especially if you need different Open MPI builds for different
> compilers, etc.
> If you don't have NFS, it is worth installing
> it beforehand and using it.
> 
> I hope it helps,
> Gus Correa
> 
> On 05/16/2013 11:38 AM, Jeff Squyres (jsquyres) wrote:
>> See http://www.open-mpi.org/faq/?category=building#where-to-install
>> 
>> 
>> On May 16, 2013, at 11:30 AM, Ralph Castain
>>  wrote:
>> 
>>> no, as long as ompi is installed in same location on each machine
>>> 
>>> On May 16, 2013, at 8:24 AM, Reza Bakhshayeshi  wrote:
>>> 
 Hi
 
 Do we need distributed file system (like NFS) when running MPI program on 
 multiple machines?
 
 thanks,
 Reza
 ___
 users mailing list
 us...@open-mpi.org
 http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] plm:tm: failed to spawn daemon, error code = 17000 Error when running jobs on 600 or more nodes

2013-05-16 Thread Ralph Castain
Check the Torque error constants - I'm not sure what that value means, but 
Torque is reporting the error. All we do is print out the value they return if 
it is an error.
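
One way to look the value up is to grep the Torque headers on the cluster
(header locations vary between installs, so the paths below are guesses; Gus's
reply above already shows that 17000 is TM_ESYSTEM in tm_.h):

grep -n 17000 /usr/include/torque/tm_.h
# or, for a from-source install under /local/software/torque:
grep -rn 17000 /local/software/torque/default/include/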


On May 16, 2013, at 9:09 AM, Qamar Nazir  wrote:

> Dear Support,
> 
> We are having an issue with our OMPI runs. When we run jobs on <=550 machines 
> (550 x 16 cores) then they work without any problem. As soon as we run them 
> on 600 or more machines we get the "plm:tm: failed to spawn daemon, error 
> code = 17000" Error
> 
> We are using:
> 
> OpenMPI ver: 1.6.4 (Compiled with GCC v4.4.6) 
> Torque ver: 2.5.12 
> 
> The ompi_info's output is attached.
> 
> 
> The Environment stats have been pasted below.
> 
> 
> Please assist.
> 
> 
> env   envsubst  
> [ocfacc@cyan01 fullrun]$ env
> MODULE_VERSION_STACK=3.2.10
> OMPI_MCA_mtl=^psm
> MANPATH=/local/software/openmpi/1.6.4/gcc/share/man:/local/software/moab/6.1.10/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man:/local/Modules/default/share/man
> HOSTNAME=cyan01
> SHELL=/bin/bash
> TERM=xterm
> HISTSIZE=1000
> QTDIR=/usr/lib64/qt-3.3
> OLDPWD=/home/ocfacc/hpl/fullrun/results
> QTINC=/usr/lib64/qt-3.3/include
> LC_ALL=POSIX
> USER=ocfacc
> LD_LIBRARY_PATH=/local/software/openmpi/1.6.4/gcc/lib:/local/software/torque/default/lib
> LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
> MPIROOT=/local/software/openmpi/1.6.4/gcc
> MODULE_VERSION=3.2.10
> MAIL=/var/spool/mail/ocfacc
> PATH=/local/software/openmpi/1.6.4/gcc/bin:/usr/lib64/qt-3.3/bin:/local/software/moab/6.1.10/sbin:/local/software/moab/6.1.10/bin:/local/software/torque/default/sbin:/local/software/torque/default/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lpp/mmfs/bin:/home/ocfacc/bin:/local/bin:.
> PWD=/home/ocfacc/hpl/fullrun
> _LMFILES_=/local/Modules/3.2.10/modulefiles/schedulers/torque/2.5.12:/local/Modules/3.2.10/modulefiles/schedulers/moab/6.1.10:/local/Modules/3.2.10/modulefiles/misc/null:/local/Modules/3.2.10/modulefiles/mpi/openmpi/1.6.4/gcc
> LANG=en_US.UTF-8
> KDE_IS_PRELINKED=1
> MOABHOMEDIR=/local/moab/6.1.10
> MODULEPATH=/local/Modules/versions:/local/Modules/modulefiles:/local/Modules/3.2.10/modulefiles/misc:/local/Modules/3.2.10/modulefiles/mpi:/local/Modules/3.2.10/modulefiles/libs:/local/Modules/3.2.10/modulefiles/compilers:/local/Modules/3.2.10/modulefiles/apps:/local/Modules/3.2.10/modulefiles/schedulers
> LOADEDMODULES=torque/2.5.12:moab/6.1.10:null:openmpi/1.6.4/gcc
> KDEDIRS=/usr
> PBS_SERVER=blue101,blue102
> SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
> HISTCONTROL=ignoredups
> SHLVL=1
> HOME=/home/ocfacc
> LOGNAME=ocfacc
> QTLIB=/usr/lib64/qt-3.3/lib
> CVS_RSH=ssh
> LC_CTYPE=POSIX
> MODULESHOME=/local/Modules/3.2.10
> LESSOPEN=|/usr/bin/lesspipe.sh %s
> G_BROKEN_FILENAMES=1
> module=() {  eval `/local/Modules/$MODULE_VERSION/bin/modulecmd bash $*`
> }
> _=/bin/env
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> 
> Best Regards, 
> Qamar Nazir
> 
> HPC Software Engineer
> 
> OCF plc
> 
> Tel: 0114 257 2200
> Fax: 0114 257 0022
> Mob: 07508 033895

Re: [OMPI users] distributed file system

2013-05-16 Thread Gus Correa

With the minor caveat that a sentence in the link below
still points to ol' LAM.  :)

"... when the time comes to patch or otherwise upgrade LAM, ..."

If you have a NFS shared file system,
if the architecture and OS are the same across the nodes,
if your cluster isn't too big (for NFS latency to have an impact),
it is much easier to install Open MPI on the NFS share
than to install it on all nodes.
One installation only to take care of.
Especially if you need different Open MPI builds for different
compilers, etc.
If you don't have NFS, it is worth installing
it beforehand and using it.
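
A rough sketch of that kind of shared install (the /shared mount point below is
a placeholder for the actual NFS export):

# build once, install into an NFS-exported prefix visible on all nodes
./configure --prefix=/shared/openmpi-1.6.4-gcc CC=gcc CXX=g++ F77=gfortran FC=gfortran
make -j4 all
make install

# each node then only needs the same two variables, e.g. in a modulefile or .bashrc
export PATH=/shared/openmpi-1.6.4-gcc/bin:$PATH
export LD_LIBRARY_PATH=/shared/openmpi-1.6.4-gcc/lib:$LD_LIBRARY_PATH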

I hope it helps,
Gus Correa

On 05/16/2013 11:38 AM, Jeff Squyres (jsquyres) wrote:

See http://www.open-mpi.org/faq/?category=building#where-to-install


On May 16, 2013, at 11:30 AM, Ralph Castain
  wrote:


no, as long as ompi is installed in same location on each machine

On May 16, 2013, at 8:24 AM, Reza Bakhshayeshi  wrote:


Hi

Do we need distributed file system (like NFS) when running MPI program on 
multiple machines?

thanks,
Reza
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users







[OMPI users] plm:tm: failed to spawn daemon, error code = 17000 Error when running jobs on 600 or more nodes

2013-05-16 Thread Qamar Nazir

Dear Support,

We are having an issue with our OMPI runs. When we run jobs on <=550 
machines (550 x 16 cores) then they work without any problem. As soon as 
we run them on 600 or more machines we get the "plm:tm: failed to spawn 
daemon, error code = 17000" Error


We are using:

OpenMPI ver: 1.6.4 (Compiled with GCC v4.4.6)
Torque ver: 2.5.12

The ompi_info's output is attached.


The environment stats have been pasted below.


Please assist.


env   envsubst
[ocfacc@cyan01 fullrun]$ env
MODULE_VERSION_STACK=3.2.10
OMPI_MCA_mtl=^psm
MANPATH=/local/software/openmpi/1.6.4/gcc/share/man:/local/software/moab/6.1.10/man:/usr/local/share/man:/usr/share/man/overrides:/usr/share/man:/local/Modules/default/share/man
HOSTNAME=cyan01
SHELL=/bin/bash
TERM=xterm
HISTSIZE=1000
QTDIR=/usr/lib64/qt-3.3
OLDPWD=/home/ocfacc/hpl/fullrun/results
QTINC=/usr/lib64/qt-3.3/include
LC_ALL=POSIX
USER=ocfacc
LD_LIBRARY_PATH=/local/software/openmpi/1.6.4/gcc/lib:/local/software/torque/default/lib
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
MPIROOT=/local/software/openmpi/1.6.4/gcc
MODULE_VERSION=3.2.10
MAIL=/var/spool/mail/ocfacc
PATH=/local/software/openmpi/1.6.4/gcc/bin:/usr/lib64/qt-3.3/bin:/local/software/moab/6.1.10/sbin:/local/software/moab/6.1.10/bin:/local/software/torque/default/sbin:/local/software/torque/default/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lpp/mmfs/bin:/home/ocfacc/bin:/local/bin:.
PWD=/home/ocfacc/hpl/fullrun
_LMFILES_=/local/Modules/3.2.10/modulefiles/schedulers/torque/2.5.12:/local/Modules/3.2.10/modulefiles/schedulers/moab/6.1.10:/local/Modules/3.2.10/modulefiles/misc/null:/local/Modules/3.2.10/modulefiles/mpi/openmpi/1.6.4/gcc
LANG=en_US.UTF-8
KDE_IS_PRELINKED=1
MOABHOMEDIR=/local/moab/6.1.10
MODULEPATH=/local/Modules/versions:/local/Modules/modulefiles:/local/Modules/3.2.10/modulefiles/misc:/local/Modules/3.2.10/modulefiles/mpi:/local/Modules/3.2.10/modulefiles/libs:/local/Modules/3.2.10/modulefiles/compilers:/local/Modules/3.2.10/modulefiles/apps:/local/Modules/3.2.10/modulefiles/schedulers
LOADEDMODULES=torque/2.5.12:moab/6.1.10:null:openmpi/1.6.4/gcc
KDEDIRS=/usr
PBS_SERVER=blue101,blue102
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/ocfacc
LOGNAME=ocfacc
QTLIB=/usr/lib64/qt-3.3/lib
CVS_RSH=ssh
LC_CTYPE=POSIX
MODULESHOME=/local/Modules/3.2.10
LESSOPEN=|/usr/bin/lesspipe.sh %s
G_BROKEN_FILENAMES=1
module=() {  eval `/local/Modules/$MODULE_VERSION/bin/modulecmd bash $*`
}
_=/bin/env









--

Best Regards,

Qamar Nazir
HPC Software Engineer
OCF plc

Tel: 0114 257 2200
Fax: 0114 257 0022
Mob: 07508 033895

OCF plc is a company registered in England and Wales. Registered number 
4132533. Registered office address: OCF plc, 5 Rotunda Business Centre, 
Thorncliffe Park, Chapeltown, Sheffield, S35 2PG

Please note, any emails relating to an OCF Support request must always 
be sent to supp...@ocf.co.uk for a ticket 
number to be generated or existing support ticket to be updated. Should 
this not be done then OCF cannot be held responsible for requests not 
dealt with in a timely manner.

This message is private and confidential. If you have received this 
message in error, please notify us immediately and remove it from your 
system.




ompi_info.txt.bz2
Description: application/bzip


[OMPI users] Configuration with Intel C++ Composer 12.0.2 on OSX 10.7.5

2013-05-16 Thread Geraldine Hochman-Klarenberg

I am having trouble configuring OpenMPI-1.6.4 with the Intel C/C++ composer 
(12.0.2). My OS is OSX 10.7.5.

I am not a computer whizz so I hope I can explain what I did properly:

1) In bash, I did source /opt/intel/bin/compilervars.sh intel64 
and then echo PATH showed: 
/opt/intel/composerxe-2011.2.142/bin/intel64:/opt/intel/composerxe-2011.2.142/mpirt/bin/intel64:/opt/intel/composerxe-2011.2.142/bin:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/Library/Frameworks/Python.framework/Versions/Current/bin:.:/Library/Frameworks/EPD64.framework/Versions/Current/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

2) which icc and which icpc showed:
/opt/intel/composerxe-2011.2.142/bin/intel64/icc
and
/opt/intel/composerxe-2011.2.142/bin/intel64/icpc

So that all seems okay to me. Still when I do
./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-1.6.4
from the folder in which the extracted OpenMPI files sit, I get


== Configuring Open MPI


*** Startup tests
checking build system type... x86_64-apple-darwin11.4.2
checking host system type... x86_64-apple-darwin11.4.2
checking target system type... x86_64-apple-darwin11.4.2
checking for gcc... icc
checking whether the C compiler works... no
configure: error: in 
`/Users/geraldinehochman-klarenberg/Projects/openmpi-1.6.4':
configure: error: C compiler cannot create executables
See `config.log' for more details

I'd really appreciate any pointers on how to solve this, because I'm running 
out of ideas (and so, it seems, is Google).

Thanks!
Geraldine
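
When configure stops at "C compiler cannot create executables", it is usually
worth reproducing the failing check by hand; a minimal sketch (the file name is
arbitrary):

# the same kind of trivial program configure tries to build and link
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF

# if this fails, the problem is in the icc setup itself (license, Xcode/SDK
# headers, 32- vs 64-bit libraries), not in Open MPI
icc conftest.c -o conftest && ./conftest && echo "icc can create executables"

# config.log records the exact command and error message configure saw
grep -n -A 20 'checking whether the C compiler works' config.log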

Re: [OMPI users] distributed file system

2013-05-16 Thread Jeff Squyres (jsquyres)
See http://www.open-mpi.org/faq/?category=building#where-to-install


On May 16, 2013, at 11:30 AM, Ralph Castain 
 wrote:

> no, as long as ompi is installed in same location on each machine
> 
> On May 16, 2013, at 8:24 AM, Reza Bakhshayeshi  wrote:
> 
>> Hi
>> 
>> Do we need distributed file system (like NFS) when running MPI program on 
>> multiple machines?
>> 
>> thanks,
>> Reza
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] distributed file system

2013-05-16 Thread Ralph Castain
no, as long as ompi is installed in same location on each machine
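
For example, one common pattern is to build once and replicate the identical
install tree to the same path on every machine (host names and prefix below are
placeholders):

PREFIX=/opt/openmpi-1.6.4
./configure --prefix=$PREFIX && make -j4 all && sudo make install

# copy the finished tree to each node (assumes write access to $PREFIX there)
for host in node01 node02 node03; do
    rsync -a $PREFIX/ $host:$PREFIX/
done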

On May 16, 2013, at 8:24 AM, Reza Bakhshayeshi  wrote:

> Hi
> 
> Do we need distributed file system (like NFS) when running MPI program on 
> multiple machines?
> 
> thanks,
> Reza
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



[OMPI users] distributed file system

2013-05-16 Thread Reza Bakhshayeshi
Hi

Do we need distributed file system (like NFS) when running MPI program on
multiple machines?

thanks,
Reza


Re: [OMPI users] MPI_SUM is not defined on the MPI_INTEGER datatype

2013-05-16 Thread Hayato KUNIIE

Following is the result of running "mpirun ompi_info" on the three nodes.

The version is the same on all three nodes.

Package: Open MPI root@bwhead.clnet Distribution  Open MPI root@bwslv01 
Distribution  Open MPI root@bwslv02 Distribution

Open MPI: 1.6.4  1.6.4  1.6.4
Open MPI SVN revision: r28081  r28081  r28081
Open MPI release date: Feb 19, 2013  Feb 19, 2013  Feb 19, 2013
Open RTE: 1.6.4  1.6.4  1.6.4
Open RTE SVN revision: r28081  r28081  r28081
Open RTE release date: Feb 19, 2013  Feb 19, 2013  Feb 19, 2013
OPAL: 1.6.4  1.6.4  1.6.4
OPAL SVN revision: r28081  r28081  r28081
OPAL release date: Feb 19, 2013  Feb 19, 2013  Feb 19, 2013
MPI API: 2.1  2.1  2.1
Ident string: 1.6.4  1.6.4  1.6.4
Prefix: /usr/local  /usr/local  /usr/local
Configured architecture: x86_64-unknown-linux-gnu 
x86_64-unknown-linux-gnu  x86_64-unknown-linux-gnu

Configure host: bwhead.clnet  bwslv01  bwslv02
Configured by: root  root  root
Configured on: Wed May  8 20:38:14 JST 2013 45 JST 2013 29 JST 2013
Configure host: bwhead.clnet  bwslv01  bwslv02
Built by: root  root  root
Built on: Wed May  8 20:48:44 JST 2013 43 JST 2013 38 JST 2013
Built host: bwhead.clnet  bwslv01  bwslv02
C bindings: yes  yes  yes
C++ bindings: yes  yes  yes
Fortran77 bindings: yes (all)  yes (all)  yes (all)
Fortran90 bindings: yes  yes  yes
Fortran90 bindings size: small  small  small
C compiler: gcc  gcc  gcc
C compiler absolute: /usr/bin/gcc  /usr/bin/gcc  /usr/bin/gcc
C compiler family name: GNU  GNU  GNU
C compiler version: 4.4.7  4.4.7  4.4.7
C++ compiler: g++  g++  g++
C++ compiler absolute: /usr/bin/g++  /usr/bin/g++  /usr/bin/g++
Fortran77 compiler: gfortran  gfortran  gfortran
Fortran77 compiler abs: /usr/bin/gfortran  /usr/bin/gfortran 
/usr/bin/gfortran

Fortran90 compiler: gfortran  gfortran  gfortran
Fortran90 compiler abs: /usr/bin/gfortran  /usr/bin/gfortran 
/usr/bin/gfortran

C profiling: yes  yes  yes
C++ profiling: yes  yes  yes
Fortran77 profiling: yes  yes  yes
Fortran90 profiling: yes  yes  yes
C++ exceptions: no  no  no
Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no) no)  no)
Sparse Groups: no  no  no
Internal debug support: no  no  no
MPI interface warnings: no  no  no
MPI parameter check: runtime  runtime  runtime
Memory profiling support: no  no  no
Memory debugging support: no  no  no
libltdl support: yes  yes  yes
Heterogeneous support: no  no  no
mpirun default --prefix: no  no  no
MPI I/O support: yes  yes  yes
MPI_WTIME support: gettimeofday  gettimeofday  gettimeofday
Symbol vis. support: yes  yes  yes
Host topology support: yes  yes  yes
MPI extensions: affinity example  affinity example  affinity example
FT Checkpoint support: no (checkpoint thread: no)  no)  no)
VampirTrace support: yes  yes  yes
MPI_MAX_PROCESSOR_NAME: 256  256  256
MPI_MAX_ERROR_STRING: 256  256  256
MPI_MAX_OBJECT_NAME: 64  64  64
MPI_MAX_INFO_KEY: 36  36  36
MPI_MAX_INFO_VAL: 256  256  256
MPI_MAX_PORT_NAME: 1024  1024  1024
MPI_MAX_DATAREP_STRING: 128  128  128
Package: Open MPI root@bwslv01 Distribution  execinfo (MCA v2.0, API 
v2.0, Component v1.6.4)  execinfo (MCA v2.0, API v2.0, Component v1.6.4)
Open MPI: 1.6.4  linux (MCA v2.0, API v2.0, Component v1.6.4) linux (MCA 
v2.0, API v2.0, Component v1.6.4)
Open MPI SVN revision: r28081  hwloc (MCA v2.0, API v2.0, Component 
v1.6.4)  hwloc (MCA v2.0, API v2.0, Component v1.6.4)
Open MPI release date: Feb 19, 2013  auto_detect (MCA v2.0, API v2.0, 
Component v1.6.4)  auto_detect (MCA v2.0, API v2.0, Component v1.6.4)
Open RTE: 1.6.4  file (MCA v2.0, API v2.0, Component v1.6.4)  file (MCA 
v2.0, API v2.0, Component v1.6.4)
Open RTE SVN revision: r28081  mmap (MCA v2.0, API v2.0, Component 
v1.6.4)  mmap (MCA v2.0, API v2.0, Component v1.6.4)
Open RTE release date: Feb 19, 2013  posix (MCA v2.0, API v2.0, 
Component v1.6.4)  posix (MCA v2.0, API v2.0, Component v1.6.4)
OPAL: 1.6.4  sysv (MCA v2.0, API v2.0, Component v1.6.4)  sysv (MCA 
v2.0, API v2.0, Component v1.6.4)
OPAL SVN revision: r28081  first_use (MCA v2.0, API v2.0, Component 
v1.6.4)  first_use (MCA v2.0, API v2.0, Component v1.6.4)
OPAL release date: Feb 19, 2013  hwloc (MCA v2.0, API v2.0, Component 
v1.6.4)  hwloc (MCA v2.0, API v2.0, Component v1.6.4)
MPI API: 2.1  linux (MCA v2.0, API v2.0, Component v1.6.4)  linux (MCA 
v2.0, API v2.0, Component v1.6.4)
Ident string: 1.6.4  env (MCA v2.0, API v2.0, Component v1.6.4) env (MCA 
v2.0, API v2.0, Component v1.6.4)
Prefix: /usr/local  config (MCA v2.0, API v2.0, Component v1.6.4) config 
(MCA v2.0, API v2.0, Component v1.6.4)
Configured architecture: x86_64-unknown-linux-gnu  linux (MCA v2.0, API 
v2.0, Component v1.6.4)  linux (MCA v2.0, API v2.0, Component v1.6.4)
Configure host: bwslv01  hwloc132 (MCA v2.0, API v2.0, Component 
v1.6.4)  hwloc132 (MCA v2.0, API v2.0, Component v1.6.4)
Configured by: root  orte (MCA v2.0, API v2.0, Component v1.6.4) orte 
(MCA v2.0, API v2.0, Component v1.6.4)
Configured on: Wed May  8 20:56:45 JST 2013  orte (MCA v2.0, API v2.0, 
Component v1.6.4)  orte (MCA v2.0, 

Re: [OMPI users] Subject: Building openmpi-1.6.4 with 64-bit integers

2013-05-16 Thread Jeff Squyres (jsquyres)
Looking at your config.log, it looks like OMPI correctly determined that the 
Fortran INTEGER size is 8.  I see statements like this:

#define OMPI_SIZEOF_FORTRAN_INTEGER 8

Are you sure that you're running the ompi_info that you just installed?  Can 
you double check to see that there's not another ompi_info somewhere in your 
path that you're accidentally executing?
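
A quick check along those lines (the /opt/openmpi prefix is taken from the
configure line quoted below):

# which ompi_info does the shell actually resolve, and are there others on PATH?
which ompi_info
type -a ompi_info

# compare against the copy that was just installed
/opt/openmpi/bin/ompi_info -a | grep 'Fort integer size'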


On May 15, 2013, at 9:58 PM, H Hogreve 
 wrote:

> Dear Jeff:
> 
> Attached please find the compressed  Config.log  file;
> perhaps it already might provide some indications for
> the problem encountered. There are several entries
> "compilation aborted for conftest.c", but I don't know
> whether this is of importance.
> Many thanks and best wishes, Hans H.
> 
> - Original Message -
> From: "Jeff Squyres (jsquyres)" 
> To: "Open MPI Users" 
> Sent: Thursday, May 16, 2013 2:49 AM
> Subject: Re: [OMPI users] Subject: Building openmpi-1.6.4 with 64-bit
> integers
> 
> 
>> Can you send all the information listed here:
>> 
>>http://www.open-mpi.org/community/help/
>> 
>> 
>> On May 15, 2013, at 7:42 PM, H Hogreve 
>> wrote:
>> 
>>> 
>>> Dear mpi team / users:
>>> 
>>> To get a mpi with 64-bit integers (linux system:
>>> ubuntu 12.04) I invoked the following
>>> configuration options:
>>> 
>>> ./configure --prefix=/opt/openmpi CXX=icpc CC=icc F77=ifort FC=ifort
>>> FFLAGS=-i8 FCFLAGS=-i8
>>> 
>>> The subsequent make/install scripts apparently
>>> went through smoothly, but when I check
>>> 
>>> ompi_info -a | grep 'Fort integer size'
>>> 
>>> the result reads:
>>> 
>>> Fort integer size: 4
>>> 
>>> What went awry?
>>> For all hints and suggestions many thanks in advance,
>>> Hans H.
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/