Re: [OMPI users] assert in opal_datatype_is_contiguous_memory_layout

2013-04-05 Thread Eric Chamberland

Hi again,

I have attached a very small example which raises the assertion.

The problem arises from a process that does not have any elements to 
write to the file (and therefore none in the MPI_File_set_view)...
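
For clarity, here is a minimal sketch of the scenario just described. It is not the attached idx_null.cc; names such as "temp_sketch" are made up for illustration. One rank owns an element and the others own none, so on the empty ranks the filetype handed to MPI_File_set_view describes no data at all, which is the situation in which the debug-build assertion is reported to fire.

#include <mpi.h>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Rank 0 owns one int at file offset 0; every other rank owns nothing.
  int count        = (rank == 0) ? 1 : 0;
  int displacement = 0;
  int value        = 42;

  MPI_Datatype filetype;
  MPI_Type_create_indexed_block(count, 1, &displacement, MPI_INT, &filetype);
  MPI_Type_commit(&filetype);

  MPI_File fh;
  MPI_File_open(MPI_COMM_WORLD, const_cast<char*>("temp_sketch"),
                MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh);

  // On the zero-element ranks the filetype is empty; this is the case the
  // debug-build assertion reportedly trips on around set_view/write.
  MPI_File_set_view(fh, 0, MPI_INT, filetype, const_cast<char*>("native"),
                    MPI_INFO_NULL);
  MPI_File_write_all(fh, &value, count, MPI_INT, MPI_STATUS_IGNORE);

  MPI_File_close(&fh);
  MPI_Type_free(&filetype);
  MPI_Finalize();
  return 0;
}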


You can see this "bug" with openmpi 1.6.3, 1.6.4 and 1.7.0 configured with:

./configure --enable-mem-debug --enable-mem-profile --enable-memchecker
 --with-mpi-param-check --enable-debug

Just compile the given example (idx_null.cc) as-is with

mpicxx -o idx_null idx_null.cc

and run with 3 processes:

mpirun -n 3 idx_null

You can modify the example by commenting out the line "#define WITH_ZERO_ELEMNT_BUG" 
to see that everything works when all processes have something 
to write.


There is no "bug" if you use openmpi 1.6.3 (and higher) without the 
debugging options.


Also, all is working well with mpich-3.0.3 configured with:

./configure --enable-g=yes


So, is this a wrong "assert" in openmpi?

Is there a real problem with using this code in "release" mode?

Thanks,

Eric

On 04/05/2013 12:57 PM, Eric Chamberland wrote:

Hi all,

I have a (large) code that works well and is using openmpi 1.6.3 (see
config.log here:
http://www.giref.ulaval.ca/~ericc/bug_openmpi/config.log_nodebug)

(I have used it successfully for reading with MPI I/O on over 1500 procs
with very large files)

However, when I use openmpi compiled with "debug" options:

./configure --enable-mem-debug --enable-mem-profile --enable-memchecker
--with-mpi-param-check --enable-debug --prefix=/opt/openmpi-1.6.3_debug
(see the other config.log here:
http://www.giref.ulaval.ca/~ericc/bug_openmpi/config.log_debug) the code
aborts with an assertion on a very small example on 2 processors
(the same very small example works fine without the debug mode).

Here is the assertion causing an abort:

===

openmpi-1.6.3/opal/datatype/opal_datatype.h:

static inline int32_t
opal_datatype_is_contiguous_memory_layout( const opal_datatype_t*
datatype, int32_t count )
{
 if( !(datatype->flags & OPAL_DATATYPE_FLAG_CONTIGUOUS) ) return 0;
 if( (count == 1) || (datatype->flags & OPAL_DATATYPE_FLAG_NO_GAPS)
) return 1;


/* This is the assertion:  */

 assert( (OPAL_PTRDIFF_TYPE)datatype->size != (datatype->ub -
datatype->lb) );

 return 0;
}

===

Can anyone tell me what this means?

It happens while writing a file with MPI I/O when I am calling for the
fourth time a "MPI_File_set_view"... with different types of
MPI_Datatype created with "MPI_Type_indexed".

I am trying to reproduce the bug with a very small example to be sent
here, but if anyone has a hint to give me...
(I would like: this assert is not good! just ignore it ;-) )

Thanks,

Eric


#include "mpi.h"
#include <cstdio>   // printf; the original header names were stripped in the archive
#include <cstdlib>

using namespace std;

void abortOnError(int ierr) {
  if (ierr != MPI_SUCCESS) {
    printf("ERROR Returned by MPI: %d\n", ierr);
    char* lCharPtr = new char[MPI_MAX_ERROR_STRING];
    int lLongueur = 0;
    MPI_Error_string(ierr, lCharPtr, &lLongueur);
    printf("ERROR_string Returned by MPI: %s\n", lCharPtr);
    MPI_Abort( MPI_COMM_WORLD, 1 );
  }
}
// This main shows how an assertion is raised if you try
// to create an MPI_File_set_view with some process holding no data

#define WITH_ZERO_ELEMNT_BUG

int main(int argc, char *argv[])
{
  int rank, size, i;
  MPI_Datatype lTypeIndexIntWithExtent, lTypeIndexIntWithoutExtent;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  if (size != 3)
  {
    printf("Please run with 3 processes.\n");
    MPI_Finalize();
    return 1;
  }
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int displacement[3];
  int* buffer = 0;

  int lTailleBuf = 0;
  if (rank == 0)
  {
    lTailleBuf = 3;
    displacement[0] = 0;
    displacement[1] = 4;
    displacement[2] = 5;
    buffer = new int[lTailleBuf];
    for (i=0; i<lTailleBuf; ++i)
      buffer[i] = 0;  /* the original fill values were lost in the archive */
  }
  /* [the setup for ranks 1 and 2, including the WITH_ZERO_ELEMNT_BUG case that
     leaves one rank with zero elements, was lost in the archive] */

  MPI_File lFile;
  abortOnError(MPI_File_open(MPI_COMM_WORLD, const_cast<char*>("temp"),
                             MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &lFile));

  MPI_Type_create_indexed_block(lTailleBuf, 1, displacement, MPI_INT, 
&lTypeIndexIntWithoutExtent);
  MPI_Type_commit(&lTypeIndexIntWithoutExtent);

  // Here we compute the total number of ints to write, in order to resize the type:
  // we exchange the global number of ints written on each call because we must
  // compute the correct "extent" of the type.  In other words, each process will
  // write only a small part of the file, but must advance its local write pointer
  // far enough so that it does not overwrite the other processes' data.
  int lTailleGlobale = 0;
  printf("[%d] Local size : %d \n",rank,lTailleBuf);

  MPI_Allreduce( &lTailleBuf, &lTailleGlobale, 1, MPI_INT, MPI_SUM, 
MPI_COMM_WORLD );

  printf("[%d] MPI_AllReduce : %d \n",rank,lTailleGlobale);

  //We now modify the extent of the type "type_without_extent"
  MPI_Type_create_resized( lTypeIndexIntWithoutExtent, 0,
                           lTailleGlobale*sizeof(int), &lTypeIndexIntWithExtent );
  /* [the remainder of the example -- MPI_Type_commit, the MPI_File_set_view with
     the resized type, the collective write and the cleanup -- was truncated in
     the archive] */
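
Since the tail of the attachment is missing, here is a standalone sketch of the pattern the comments above describe: each rank builds an indexed filetype for its slots, the type's extent is resized to the global element count so that repeated collective writes land past everyone's data, and the view is set once. It is only an illustration under assumed names (kDispl, "temp_sketch2"), not the original idx_null.cc.

#include <mpi.h>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Each rank owns one int per "record"; rank r's slot is file index r.
  int kCount = 1;
  int kDispl = rank;
  int value  = 100 + rank;

  MPI_Datatype idx, filetype;
  MPI_Type_create_indexed_block(kCount, 1, &kDispl, MPI_INT, &idx);

  // Global number of ints written per record, summed over all ranks.
  int globalCount = 0;
  MPI_Allreduce(&kCount, &globalCount, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

  // Stretch the extent so the next record starts after everyone's data.
  MPI_Type_create_resized(idx, 0, globalCount * (MPI_Aint)sizeof(int), &filetype);
  MPI_Type_commit(&filetype);

  MPI_File fh;
  MPI_File_open(MPI_COMM_WORLD, const_cast<char*>("temp_sketch2"),
                MPI_MODE_RDWR | MPI_MODE_CREATE, MPI_INFO_NULL, &fh);
  MPI_File_set_view(fh, 0, MPI_INT, filetype, const_cast<char*>("native"),
                    MPI_INFO_NULL);

  // Two records: the second write automatically starts one extent further on.
  MPI_File_write_all(fh, &value, kCount, MPI_INT, MPI_STATUS_IGNORE);
  MPI_File_write_all(fh, &value, kCount, MPI_INT, MPI_STATUS_IGNORE);

  MPI_File_close(&fh);
  MPI_Type_free(&filetype);
  MPI_Type_free(&idx);
  MPI_Finalize();
  return 0;
}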

Re: [OMPI users] cannot build openmpi-1.9r28290 on Linux/Solaris

2013-04-05 Thread Ralph Castain

On Apr 5, 2013, at 12:33 AM, Siegmar Gross 
 wrote:

> Hi
> 
> today I tried to install openmpi-1.9r28290 and got the following errors.
> 
> Solaris 10, Sparc, Sun C 5.12, 32-bit version of openmpi
> Solaris 10, x86_64, Sun C 5.12, 32-bit version of openmpi
> Solaris 10, Sparc, Sun C 5.12, 64-bit version of openmpi
> Solaris 10, x86_64, Sun C 5.12, 64-bit version of openmpi
> -
> 
> ...
>  CC   topology-solaris.lo
> "../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc152/hwloc/src/topolo
> gy-solaris.c", line 226: undefined symbol: binding
> "../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc152/hwloc/src/topolo
> gy-solaris.c", line 227: undefined symbol: hwloc_set
> "../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc152/hwloc/src/topolo
> gy-solaris.c", line 227: warning: improper pointer/integer combination: arg #1
> cc: acomp failed for 
> ../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc
> 152/hwloc/src/topology-solaris.c
> make[4]: *** [topology-solaris.lo] Error 1
> ...
> 

Found a missing variable declaration - try with r28293 or above.

> 
> 
> 
> openSuSE Linux 12.1, x86_64, Sun C 5.12, 32-bit version of openmpi
> openSuSE Linux 12.1, x86_64, Sun C 5.12, 64-bit version of openmpi
> --
> 
> ...
>  PPFC mpi-f08-sizeof.lo
>  PPFC mpi-f08.lo
> "../../../../../openmpi-1.9r28290/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", 
> Lin
> e = 1, Column = 1: INTERNAL: Interrupt: Segmentation fault
> make[2]: *** [mpi-f08.lo] Error 1
> make[2]: Leaving directory 
> `/export2/src/openmpi-1.9/openmpi-1.9-Linux.x86_64.32
> _cc/ompi/mpi/fortran/use-mpi-f08'
> make[1]: *** [all-recursive] Error 1
> ...
> 

I have to defer the Fortran stuff to Jeff.


> 
> I could build an older version.
> 
> Package: Open MPI root@linpc1 Distribution
>Open MPI: 1.9r28209
>  Open MPI repo revision: r28209
>   Open MPI release date: Mar 25, 2013 (nightly snapshot tarball)
>Open RTE: 1.9
>  Open RTE repo revision: r28134M
>   Open RTE release date: Feb 28, 2013
>OPAL: 1.9
>  OPAL repo revision: r28134M
>   OPAL release date: Feb 28, 2013
> MPI API: 2.1
>Ident string: 1.9r28209
>  Prefix: /usr/local/ompi-java_64_cc
> Configured architecture: x86_64-unknown-linux-gnu
>  Configure host: linpc1
>   Configured by: root
>   Configured on: Tue Mar 26 15:54:59 CET 2013
>  Configure host: linpc1
>Built by: root
>Built on: Tue Mar 26 16:31:01 CET 2013
>  Built host: linpc1
>  C bindings: yes
>C++ bindings: yes
> Fort mpif.h: yes (all)
>Fort use mpi: yes (full: ignore TKR)
>   Fort use mpi size: deprecated-ompi-info-value
>Fort use mpi_f08: yes
> Fort mpi_f08 compliance: The mpi_f08 module is available, but due to 
> limitations in the f95 compiler, does not support the following: array 
> subsections, ABSTRACT INTERFACE function pointers, Fortran '08-specified 
> ASYNCHRONOUS behavior, PROCEDUREs, direct passthru (where possible) to 
> underlying Open MPI's C functionality
>  Fort mpi_f08 subarrays: no
>   Java bindings: yes
>  C compiler: cc
> C compiler absolute: /opt/solstudio12.3/bin/cc
>  C compiler family name: SUN
>  C compiler version: 0x5120
>C++ compiler: CC
>   C++ compiler absolute: /opt/solstudio12.3/bin/CC
>   Fort compiler: f95
>   Fort compiler abs: /opt/solstudio12.3/bin/f95
> Fort ignore TKR: yes (!$PRAGMA IGNORE_TKR)
>   Fort 08 assumed shape: no
>  Fort optional args: yes
>Fort BIND(C): yes
>Fort PRIVATE: yes
>   Fort ABSTRACT: no
>   Fort ASYNCHRONOUS: no
>  Fort PROCEDURE: no
> Fort f08 using wrappers: yes
> C profiling: yes
>   C++ profiling: yes
>   Fort mpif.h profiling: yes
>  Fort use mpi profiling: yes
>   Fort use mpi_f08 prof: yes
>  C++ exceptions: yes
>  Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes, 
> OMPI progress: no, ORTE progress: no, Event lib: no)
>   Sparse Groups: no
>  Internal debug support: yes
>  MPI interface warnings: yes
> MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
> libltdl support: yes
>   Heterogeneous support: yes
> mpirun default --prefix: no
> MPI I/O support: yes
>   MPI_WTIME support: gettimeofday
> Symbol vis. support: yes
>   Host topology support: yes
>  MPI extensions: 
>   FT Checkpoint support: no (checkpoint thread: no)
>   C/R Enabled Debugging: no
> VampirTrace support: yes
>  MPI_MAX_PROCESSOR_NAME: 256
>MPI_MAX_ERROR_STRING: 256
> MPI_MAX_OBJECT_NAME: 64
>MPI_MAX_INFO_KEY: 36
>MPI_MAX

[OMPI users] assert in opal_datatype_is_contiguous_memory_layout

2013-04-05 Thread Eric Chamberland

Hi all,

I have a (large) code that works well and is using openmpi 1.6.3 (see 
config.log here: 
http://www.giref.ulaval.ca/~ericc/bug_openmpi/config.log_nodebug)


(I have used it successfully for reading with MPI I/O on over 1500 procs 
with very large files)


However, when I use openmpi compiled with "debug" options:

./configure --enable-mem-debug --enable-mem-profile --enable-memchecker 
--with-mpi-param-check --enable-debug --prefix=/opt/openmpi-1.6.3_debug
(see the other config.log here: 
http://www.giref.ulaval.ca/~ericc/bug_openmpi/config.log_debug) the code 
aborts with an assertion on a very small example on 2 processors 
(the same very small example works fine without the debug mode).


Here is the assertion causing an abort:

===

openmpi-1.6.3/opal/datatype/opal_datatype.h:

static inline int32_t
opal_datatype_is_contiguous_memory_layout( const opal_datatype_t* 
datatype, int32_t count )

{
if( !(datatype->flags & OPAL_DATATYPE_FLAG_CONTIGUOUS) ) return 0;
if( (count == 1) || (datatype->flags & OPAL_DATATYPE_FLAG_NO_GAPS) 
) return 1;



/* This is the assertion:  */

assert( (OPAL_PTRDIFF_TYPE)datatype->size != (datatype->ub - 
datatype->lb) );


return 0;
}

===

Can anyone tell me what this means?

It happens while writing a file with MPI I/O when I am calling for the 
fourth time a "MPI_File_set_view"... with different types of 
MPI_Datatype created with "MPI_Type_indexed".


I am trying to reproduce the bug with a very small example to be sent 
here, but if anyone has a hint to give me...

(I would like: this assert is not good! just ignore it ;-) )

Thanks,

Eric


Re: [OMPI users] cannot build 32-bit openmpi-1.7 on Linux

2013-04-05 Thread Paul Kapinos
I believe with 99% probability that this is not an Open MPI issue, but an issue of the 
Fortran compiler (PPFC) being used.


You can verify this by going to the build subdir ('Entering directory...') and 
trying to find out _what command was called_. If your compiler crashes again, 
build a reproducer and send it to the compiler developer team :o)


Best
Paul Kapinos

On 04/05/13 17:56, Siegmar Gross wrote:

   PPFC mpi-f08.lo
"../../../../../openmpi-1.7/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Line = 1,
Column = 1: INTERNAL: Interrupt: Segmentation fault



--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915





[OMPI users] cannot build 32-bit openmpi-1.7 on Linux

2013-04-05 Thread Siegmar Gross
Hi

today I tried to install openmpi-1.7 and got the following error
on my Linux system.



openSuSE Linux 12.1, x86_64, Sun C 5.12, 32-bit version of openmpi
--

linpc1 openmpi-1.7-Linux.x86_64.32_cc 103 tail log.make.Linux.x86_64.32_cc
Making all in mpi/fortran/use-mpi-f08
make[2]: Entering directory 
`/export2/src/openmpi-1.7/openmpi-1.7-Linux.x86_64.32_cc/ompi/mpi/fortran/use-mp
i-f08'
  PPFC mpi-f08-sizeof.lo
  PPFC mpi-f08.lo
"../../../../../openmpi-1.7/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Line = 
1, 
Column = 1: INTERNAL: Interrupt: Segmentation fault
make[2]: *** [mpi-f08.lo] Error 1
make[2]: Leaving directory 
`/export2/src/openmpi-1.7/openmpi-1.7-Linux.x86_64.32_cc/ompi/mpi/fortran/use-mp
i-f08'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
`/export2/src/openmpi-1.7/openmpi-1.7-Linux.x86_64.32_cc/ompi'
make: *** [all-recursive] Error 1



linpc1 openmpi-1.7-Linux.x86_64.32_cc 104 head config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.7, which was
generated by GNU Autoconf 2.69.  Invocation command line was

  $ ../openmpi-1.7/configure --prefix=/usr/local/openmpi-1.7_32_cc 
--with-jdk-bindir=/usr/local/jdk1.7.0_07-32/bin 
--with-jdk-headers=/usr/local/jdk1.7.0_07-32/include 
JAVA_HOME=/usr/local/jdk1.7.0_07-32 LDFLAGS=-m32 CC=cc CXX=CC FC=f95 
CFLAGS=-m32 
CXXFLAGS=-m32 -library=stlport4 FCFLAGS=-m32 CPP=cpp CXXCPP=cpp CPPFLAGS= 
CXXCPPFLAGS= --enable-cxx-exceptions --enable-mpi-java --enable-heterogeneous 
--enable-opal-multi-threads --enable-mpi-thread-multiple --with-threads=posix 
--with-hwloc=internal --without-verbs --without-udapl 
--with-wrapper-cflags=-m32 
--enable-debug

## - ##
## Platform. ##



I could build an older version.


linpc1 bin 108 ./ompi_info | more
 Package: Open MPI root@linpc1 Distribution
Open MPI: 1.7rc9r28266
  Open MPI repo revision: r28266
   Open MPI release date: Mar 28, 2013 (nightly snapshot tarball)
Open RTE: 1.7rc9r28266
  Open RTE repo revision: r28266
   Open RTE release date: Mar 28, 2013 (nightly snapshot tarball)
OPAL: 1.7rc9r28266
  OPAL repo revision: r28266
   OPAL release date: Mar 28, 2013 (nightly snapshot tarball)
 MPI API: 2.1
Ident string: 1.7rc9r28266
  Prefix: /usr/local/openmpi-1.7_32_cc
 Configured architecture: x86_64-unknown-linux-gnu
  Configure host: linpc1
   Configured by: root
   Configured on: Thu Apr  4 09:18:23 CEST 2013
  Configure host: linpc1
Built by: root
Built on: Thu Apr  4 09:58:32 CEST 2013
  Built host: linpc1
  C bindings: yes
C++ bindings: yes
 Fort mpif.h: yes (all)
Fort use mpi: yes (full: ignore TKR)
   Fort use mpi size: deprecated-ompi-info-value
Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to 
limitations in t
he f95 compiler, does not support the following: array subsections, ABSTRACT 
INTERFACE
 function pointers, Fortran '08-specified ASYNCHRONOUS behavior, PROCEDUREs, 
direct pa
ssthru (where possible) to underlying Open MPI's C functionality
  Fort mpi_f08 subarrays: no
  C compiler: cc
 C compiler absolute: /opt/solstudio12.3/bin/cc
  C compiler family name: SUN
  C compiler version: 0x5120
C++ compiler: CC
   C++ compiler absolute: /opt/solstudio12.3/bin/CC
   Fort compiler: f95
   Fort compiler abs: /opt/solstudio12.3/bin/f95
...



I can build a 64-bit version.

tyr openmpi-1.7-Linux.x86_64.64_cc 212 grep INTERNAL log.*
tyr openmpi-1.7-Linux.x86_64.64_cc 213 


I would be grateful if somebody could fix the problem for the 32-bit
version. Thank you very much for any help in advance.


Kind regards

Siegmar



Re: [OMPI users] OpenMPI collective algorithm selection

2013-04-05 Thread Ralph Castain
You can get the headers installed by adding --with-devel-headers to the 
configure line.

On Apr 5, 2013, at 5:10 AM, chandan basu  wrote:

> Hi,
> 
> I want to use OpenMPI dynamic collective algorithm selection using rules 
> file, e.g.
> 
> mpirun --mca coll_tuned_use_dynamic_rules 1 
> --mca_coll_tuned_dynamic_rules_file rules.txt ./myexe
> 
> I can see some examples in earlier discussions (given below). My question is 
> how I can find the IDs of the different collectives. I do not see 
> coll_tuned.h in the installation folder. Is there any command to find the ID 
> of an algorithm? I am particularly interested in Alltoallv. I have checked 
> that coll_tuned_alltoallv_algorithm 1 and coll_tuned_alltoallv_algorithm 2 
> show a large performance difference depending on data size and comm size. So I 
> think supplying a rules file can improve the performance over a range of data 
> sizes and comm sizes. Any help in this regard will be appreciated.
> 
> With regards,
> 
> -Chandan
> 
> Dr. Chandan Basu
> National Supercomputer Center
> Linköping University
> S-581 83 Linköping
> email: cb...@nsc.liu.se
> -
> >1 # num of collectives 
> >3 # ID = 3 Alltoall collective (ID in coll_tuned.h) 
> >2 # number of com sizes 
> >1 # comm size 1 
> >1 # number of msg sizes 1 
> >0 1 0 0 # for message size 0, linear 1, topo 0, 0 segmentation 
> >8 # comm size 8 
> >4 # number of msg sizes 
> >0 1 0 0 # for message size 0, linear 1, topo 0, 0 segmentation 
> >32768 2 0 0 # 32k, pairwise 2, no topo or segmentation 
> >262144 1 0 0 # 256k, use linear 1, no topo or segmentation 
> >524288 2 0 0 # message size 512k+, pairwise 2, topo 0, 0 segmentation 
> ># end of first collective 
> 
> 
> -- 
> 



[OMPI users] OpenMPI collective algorithm selection

2013-04-05 Thread chandan basu
Hi,

I want to use OpenMPI dynamic collective algorithm selection using rules
file, e.g.

mpirun --mca coll_tuned_use_dynamic_rules 1
--mca_coll_tuned_dynamic_rules_file rules.txt ./myexe

I can see some examples in earlier discussions (given below). My question
is how I can find the IDs of the different collectives. I do not see
coll_tuned.h
in the installation folder. Is there any command to find the ID of an
algorithm? I am particularly interested in Alltoallv. I have checked that
coll_tuned_alltoallv_algorithm 1 and coll_tuned_alltoallv_algorithm 2 show
a large performance difference depending on data size and comm size. So I
think supplying a rules file can improve the performance over a range of data
sizes and comm sizes. Any help in this regard will be appreciated.

With regards,

-Chandan

Dr. Chandan Basu
National Supercomputer Center
Linköping University
S-581 83 Linköping
email: cb...@nsc.liu.se
-
>1 # num of collectives
>3 # ID = 3 Alltoall collective (ID in coll_tuned.h)
>2 # number of com sizes
>1 # comm size 1
>1 # number of msg sizes 1
>0 1 0 0 # for message size 0, linear 1, topo 0, 0 segmentation
>8 # comm size 8
>4 # number of msg sizes
>0 1 0 0 # for message size 0, linear 1, topo 0, 0 segmentation
>32768 2 0 0 # 32k, pairwise 2, no topo or segmentation
>262144 1 0 0 # 256k, use linear 1, no topo or segmentation
>524288 2 0 0 # message size 512k+, pairwise 2, topo 0, 0 segmentation
># end of first collective


--


Re: [OMPI users] problems building openmpi v 1.6.4 using a local build of gcc 4.7.2 on rhel6

2013-04-05 Thread Jeff Squyres (jsquyres)
It looks like you configured with gfortran 4.7.2 
(/nm/programs/third_party/gcc-4.7.2-rhel5/bin/gfortran).  

Did you change your path after that, such that a different gfortran was 
found/used to build Open MPI?

I ask because real*16 (etc.) were all found and used successfully in configure, 
but then failed when you built.  I'm guessing that this means that a different 
fortran compiler was used between configure and make.


On Apr 4, 2013, at 9:41 PM, Alan Sayre  wrote:

> I'm trying to build openmpi v.1.6.4 using a local build of gcc 4.7.2 on rhel6.
> 
> The configure and build scripts are attached. The config.log and build.output 
> are attached.
> 
> The last few lines of the build output are:
> 
> make[3]: Entering directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f77'
> make[3]: Nothing to be done for `all-am'.
> make[3]: Leaving directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f77'
> make[2]: Leaving directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f77'
> Making all in mpi/f90
> make[2]: Entering directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90'
> make  all-recursive
> make[3]: Entering directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90'
> Making all in scripts
> make[4]: Entering directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90/scripts'
> make[4]: Nothing to be done for `all'.
> make[4]: Leaving directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90/scripts'
> make[4]: Entering directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90'
>   FC mpi.lo
>  In file mpi-f90-interfaces.h:1586
> 
>  Included at mpi.f90:37
> 
>   real*16, intent(in) :: x
> 1
> Error: Old-style type declaration REAL*16 not supported at (1)
>  In file mpi-f90-interfaces.h:1607
> 
>  Included at mpi.f90:37
> 
>   complex*32, intent(in) :: x
>1
> Error: Old-style type declaration COMPLEX*32 not supported at (1)
>  In file mpi-f90-interfaces.h:1670
> 
>  Included at mpi.f90:37
> 
>   real*16, dimension(*), intent(in) :: x
> 1
> Error: Old-style type declaration REAL*16 not supported at (1)
>  In file mpi-f90-interfaces.h:1691
> 
>  Included at mpi.f90:37
> 
>   complex*32, dimension(*), intent(in) :: x
>1
> Error: Old-style type declaration COMPLEX*32 not supported at (1)
>  In file mpi-f90-interfaces.h:1754
> 
>  Included at mpi.f90:37
> 
>   real*16, dimension(1,*), intent(in) :: x
> 1
> Error: Old-style type declaration REAL*16 not supported at (1)
>  In file mpi-f90-interfaces.h:1775
> 
>  Included at mpi.f90:37
> 
>   complex*32, dimension(1,*), intent(in) :: x
>1
> Error: Old-style type declaration COMPLEX*32 not supported at (1)
>  In file mpi-f90-interfaces.h:1838
> 
>  Included at mpi.f90:37
> 
>   real*16, dimension(1,1,*), intent(in) :: x
> 1
> Error: Old-style type declaration REAL*16 not supported at (1)
>  In file mpi-f90-interfaces.h:1859
> 
>  Included at mpi.f90:37
> 
>   complex*32, dimension(1,1,*), intent(in) :: x
>1
> Error: Old-style type declaration COMPLEX*32 not supported at (1)
>  In file mpi-f90-interfaces.h:1922
> 
>  Included at mpi.f90:37
> 
>   real*16, dimension(1,1,1,*), intent(in) :: x
> 1
> Error: Old-style type declaration REAL*16 not supported at (1)
>  In file mpi-f90-interfaces.h:1943
> 
>  Included at mpi.f90:37
> 
>   complex*32, dimension(1,1,1,*), intent(in) :: x
>1
> Error: Old-style type declaration COMPLEX*32 not supported at (1)
>  In file mpi-f90-interfaces.h:1946
> 
>  Included at mpi.f90:37
> 
> end subroutine MPI_Sizeof4DC32
>  1
> Error: Ambiguous interfaces 'mpi_sizeof4dc32' and 'mpi_sizeof4dr16' in 
> generic interface 'mpi_sizeof' at (1)
> make[4]: *** [mpi.lo] Error 1
> make[4]: Leaving directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90'
> make[3]: *** [all-recursive] Error 1
> make[3]: Leaving directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90'
> make[2]: *** [all] Error 2
> make[2]: Leaving directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi/mpi/f90'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory 
> `/nm/programs/third_party/tmp-install/openmpi-1.6.4-blgwap02/ompi'
> make: *** [all-recursive] Error 1
> 
> 
> What am I doing wrong?
> 
> Thanks,
> 
> Alan
> <1_Warning.txt><2_Warning.txt><3_Warning.txt>


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] configure problem

2013-04-05 Thread Reuti
AFAICS the problem is in the middle of the output (it's often not directly at the end):

On 05.04.2013 at 03:21, Alan Sayre wrote:

> 

configure:17615: g++ -o conftest -O3 -DNDEBUG -finline-functions   conftest.cpp 
 >&5
In file included from /usr/include/stdio.h:929:0,
 from conftest.cpp:141:
/usr/include/bits/stdio.h: In function '__ssize_t getline(char**, size_t*, 
FILE*)':
/usr/include/bits/stdio.h:118:52: error: '__getdelim' was not declared in this 
scope
configure:17615: $? = 1
configure: program exited with status 1

With which version of the original gcc did you compile your own gcc 4.7.2?

-- Reuti


[OMPI users] cannot build openmpi-1.9r28290 on Linux/Solaris

2013-04-05 Thread Siegmar Gross
Hi

today I tried to install openmpi-1.9r28290 and got the following errors.

Solaris 10, Sparc, Sun C 5.12, 32-bit version of openmpi
Solaris 10, x86_64, Sun C 5.12, 32-bit version of openmpi
Solaris 10, Sparc, Sun C 5.12, 64-bit version of openmpi
Solaris 10, x86_64, Sun C 5.12, 64-bit version of openmpi
-

...
  CC   topology-solaris.lo
"../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc152/hwloc/src/topolo
gy-solaris.c", line 226: undefined symbol: binding
"../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc152/hwloc/src/topolo
gy-solaris.c", line 227: undefined symbol: hwloc_set
"../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc152/hwloc/src/topolo
gy-solaris.c", line 227: warning: improper pointer/integer combination: arg #1
cc: acomp failed for ../../../../../../../openmpi-1.9r28290/opal/mca/hwloc/hwloc
152/hwloc/src/topology-solaris.c
make[4]: *** [topology-solaris.lo] Error 1
...




openSuSE Linux 12.1, x86_64, Sun C 5.12, 32-bit version of openmpi
openSuSE Linux 12.1, x86_64, Sun C 5.12, 64-bit version of openmpi
--

...
  PPFC mpi-f08-sizeof.lo
  PPFC mpi-f08.lo
"../../../../../openmpi-1.9r28290/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Lin
e = 1, Column = 1: INTERNAL: Interrupt: Segmentation fault
make[2]: *** [mpi-f08.lo] Error 1
make[2]: Leaving directory `/export2/src/openmpi-1.9/openmpi-1.9-Linux.x86_64.32
_cc/ompi/mpi/fortran/use-mpi-f08'
make[1]: *** [all-recursive] Error 1
...


I could build an older version.

 Package: Open MPI root@linpc1 Distribution
Open MPI: 1.9r28209
  Open MPI repo revision: r28209
   Open MPI release date: Mar 25, 2013 (nightly snapshot tarball)
Open RTE: 1.9
  Open RTE repo revision: r28134M
   Open RTE release date: Feb 28, 2013
OPAL: 1.9
  OPAL repo revision: r28134M
   OPAL release date: Feb 28, 2013
 MPI API: 2.1
Ident string: 1.9r28209
  Prefix: /usr/local/ompi-java_64_cc
 Configured architecture: x86_64-unknown-linux-gnu
  Configure host: linpc1
   Configured by: root
   Configured on: Tue Mar 26 15:54:59 CET 2013
  Configure host: linpc1
Built by: root
Built on: Tue Mar 26 16:31:01 CET 2013
  Built host: linpc1
  C bindings: yes
C++ bindings: yes
 Fort mpif.h: yes (all)
Fort use mpi: yes (full: ignore TKR)
   Fort use mpi size: deprecated-ompi-info-value
Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to 
limitations in the f95 compiler, does not support the following: array 
subsections, ABSTRACT INTERFACE function pointers, Fortran '08-specified 
ASYNCHRONOUS behavior, PROCEDUREs, direct passthru (where possible) to 
underlying Open MPI's C functionality
  Fort mpi_f08 subarrays: no
   Java bindings: yes
  C compiler: cc
 C compiler absolute: /opt/solstudio12.3/bin/cc
  C compiler family name: SUN
  C compiler version: 0x5120
C++ compiler: CC
   C++ compiler absolute: /opt/solstudio12.3/bin/CC
   Fort compiler: f95
   Fort compiler abs: /opt/solstudio12.3/bin/f95
 Fort ignore TKR: yes (!$PRAGMA IGNORE_TKR)
   Fort 08 assumed shape: no
  Fort optional args: yes
Fort BIND(C): yes
Fort PRIVATE: yes
   Fort ABSTRACT: no
   Fort ASYNCHRONOUS: no
  Fort PROCEDURE: no
 Fort f08 using wrappers: yes
 C profiling: yes
   C++ profiling: yes
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: yes
  C++ exceptions: yes
  Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes, 
OMPI progress: no, ORTE progress: no, Event lib: no)
   Sparse Groups: no
  Internal debug support: yes
  MPI interface warnings: yes
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
 libltdl support: yes
   Heterogeneous support: yes
 mpirun default --prefix: no
 MPI I/O support: yes
   MPI_WTIME support: gettimeofday
 Symbol vis. support: yes
   Host topology support: yes
  MPI extensions: 
   FT Checkpoint support: no (checkpoint thread: no)
   C/R Enabled Debugging: no
 VampirTrace support: yes
  MPI_MAX_PROCESSOR_NAME: 256
MPI_MAX_ERROR_STRING: 256
 MPI_MAX_OBJECT_NAME: 64
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
   MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
   MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.9)
   MCA event: libevent2019 (MCA v2.0, API v2.0, Component v1.9)
   MCA hwloc: hwloc152 (MCA v2.0, API v2.0, Component v1.9)
  MCA if: linux_ipv6 (MCA v2.0, API v