Re: [OMPI users] ARM/Allinea DDT

2018-04-12 Thread Charles A Taylor
Understood.  But since the OpenMPI versions in question are not listed as 
supported on the ARM/Allinea product page, I thought I’d ask if it is 
_supposed_ to work before bothering folks with details.

In the meantime, ARM/Allinea has responded, so I'll provide them with the 
details.  In short, DDT can't seem to attach to the Open MPI 3.0 
processes/ranks and just hangs trying.  With the same code, that doesn't 
happen with Open MPI 1.10.2 or Intel MPI 5.1.1.  This is under RHEL 7.4, 
launching with srun under SLURM 16.05.11 (for those who want to know).
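
For anyone else following along: the "MPIR attach" mentioned in the reply
below refers to the MPIR process-acquisition interface, in which the launcher
(mpirun/orterun, or srun) exports a small set of well-known symbols that the
debugger reads to discover each rank's host and PID.  A minimal, purely
illustrative sketch of those symbols (this is not Open MPI or DDT source; the
names follow the published MPIR interface):

#include <stdlib.h>

/* One entry per MPI rank; the debugger reads this table from the launcher. */
typedef struct {
    char *host_name;        /* node the rank is running on         */
    char *executable_name;  /* path to the rank's executable       */
    int   pid;              /* process ID the debugger attaches to */
} MPIR_PROCDESC;

volatile int   MPIR_being_debugged = 0;    /* set to 1 by the debugger  */
volatile int   MPIR_debug_state    = 0;    /* e.g. MPIR_DEBUG_SPAWNED   */
MPIR_PROCDESC *MPIR_proctable      = NULL; /* filled in by the launcher */
int            MPIR_proctable_size = 0;

/* The launcher calls this once the proctable is ready; the debugger sets a
 * breakpoint here so it knows when to read the table. */
void MPIR_Breakpoint(void) { /* intentionally empty */ }

int main(void)
{
    /* A real launcher would populate MPIR_proctable with one entry per
     * rank before calling MPIR_Breakpoint(). */
    MPIR_Breakpoint();
    return EXIT_SUCCESS;
}

If the launcher never fills in MPIR_proctable, or never reaches
MPIR_Breakpoint(), a tool attaching through this interface just sits and
waits, which at least matches the symptom described above.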

Thanks for the replies,

Charlie


> On Apr 11, 2018, at 3:15 PM, r...@open-mpi.org wrote:
> 
> You should probably provide a little more info here. I know the MPIR attach 
> was broken in the v2.x series, but we fixed that - it could be that something 
> remains broken in OMPI 3.x.
> 
> FWIW: I doubt it's an Allinea problem.
> 
>> On Apr 11, 2018, at 11:54 AM, Charles A Taylor  wrote:
>> 
>> 
>> Contacting ARM seems a bit difficult, so I thought I would ask here.  We rely 
>> on DDT for debugging, but it doesn’t work with Open MPI 3.x, and I can’t find 
>> anything about plans to support it.
>> 
>> Anyone know if ARM DDT has plans to support newer versions of OpenMPI?
>> 
>> Charlie Taylor
>> UF Research Computing
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
> 
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

[OMPI users] error building openmpi-master-201804110351-664ba32 on Linux with Sun C

2018-04-12 Thread Siegmar Gross

Hi,

I've tried to install openmpi-master-201804110351-664ba32 on my "SUSE Linux
Enterprise Server 12.3 (x86_64)" with Sun C 5.15 (Oracle Developer Studio
12.6). Unfortunately I get the following error.


loki openmpi-master-201804110351-664ba32-Linux.x86_64.64_cc 115 head -7 
config.log | tail -1
  $ ../openmpi-master-201804110351-664ba32/configure 
--prefix=/usr/local/openmpi-master_64_cc 
--libdir=/usr/local/openmpi-master_64_cc/lib64 
--with-jdk-bindir=/usr/local/jdk-9/bin 
--with-jdk-headers=/usr/local/jdk-9/include JAVA_HOME=/usr/local/jdk-9 
LDFLAGS=-m64 -mt -Wl,-z -Wl,noexecstack -L/usr/local/lib64 CC=cc CXX=CC FC=f95 
CFLAGS=-m64 -mt -I/usr/local/include CXXFLAGS=-m64 -I/usr/local/include 
FCFLAGS=-m64 CPP=cpp -I/usr/local/include CXXCPP=cpp -I/usr/local/include 
--enable-mpi-cxx --enable-cxx-exceptions --enable-mpi-java 
--with-valgrind=/usr/local/valgrind --with-hwloc=internal --without-verbs 
--with-wrapper-cflags=-m64 -mt --with-wrapper-cxxflags=-m64 
--with-wrapper-fcflags=-m64 --with-wrapper-ldflags=-mt --enable-debug



loki openmpi-master-201804110351-664ba32-Linux.x86_64.64_cc 116 tail -20 
log.make.Linux.x86_64.64_cc

  PPFC add_error_string_f08.lo
  PPFC aint_add_f08.lo
  PPFC aint_diff_f08.lo
  PPFC allgather_f08.lo

   OMPI_FORTRAN_IGNORE_TKR_TYPE, INTENT(IN), ASYNCHRONOUS :: origin_addr
 ^
"../../../../../openmpi-master-201804110351-664ba32/ompi/mpi/fortran/use-mpi-f08/accumulate_f08.F90", 
Line = 16, Column = 46: ERROR: Attributes ASYNCHRONOUS and INTENT must not 
appear in the same attribute list.


f90comp: 190 SOURCE LINES
f90comp: 1 ERRORS, 0 WARNINGS, 0 OTHER MESSAGES, 0 ANSI
Makefile:4416: recipe for target 'accumulate_f08.lo' failed
make[2]: *** [accumulate_f08.lo] Error 1
make[2]: *** Waiting for unfinished jobs
make[2]: Leaving directory 
'/export2/src/openmpi-master/openmpi-master-201804110351-664ba32-Linux.x86_64.64_cc/ompi/mpi/fortran/use-mpi-f08'

Makefile:3492: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
'/export2/src/openmpi-master/openmpi-master-201804110351-664ba32-Linux.x86_64.64_cc/ompi'

Makefile:1893: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1
loki openmpi-master-201804110351-664ba32-Linux.x86_64.64_cc 117


I would be grateful if somebody could fix the problem. Do you need anything
else? Thank you very much in advance for any help.


Kind regards

Siegmar
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


[OMPI users] CfP for VHPC ‘18 - Papers due May 15 (extended) for the 13th Virtualization in High-Performance Cloud Computing Workshop

2018-04-12 Thread VHPC 18
CALL FOR PAPERS

13th Workshop on Virtualization in High-Performance Cloud Computing
(VHPC '18), held in conjunction with the International Supercomputing
Conference - High Performance, June 24-28, 2018, Frankfurt, Germany.
(Springer LNCS Proceedings)

Date: June 28, 2018
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 15, 2018 (extended)
Springer LNCS, rolling abstract submission
Abstract/Paper Submission Link: https://edas.info/newPaper.php?c=24355
Special Track: GPU - Accelerator Virtualization

Call for Papers

Virtualization technologies constitute a key enabling factor for flexible
resource management in modern data centers, and particularly in cloud
environments. Cloud providers need to manage complex infrastructures in a
seamless fashion to support the highly dynamic and heterogeneous workloads
and hosted applications customers deploy. Similarly, HPC environments have
been increasingly adopting techniques that enable flexible management of
vast computing and networking resources, close to marginal provisioning
cost, which is unprecedented in the history of scientific and commercial
computing.

Various virtualization technologies contribute to the overall picture in
different ways: machine virtualization, with its capability to enable
consolidation of multiple underutilized servers with heterogeneous software
and operating systems (OSes), and its capability to live-migrate a fully
operating virtual machine (VM) with a very short downtime, enables novel and
dynamic ways to manage physical servers; OS-level virtualization (i.e.,
containerization), with its capability to isolate multiple user-space
environments and to allow for their co-existence within the same OS kernel,
promises to provide many of the advantages of machine virtualization with
high levels of responsiveness and performance; I/O virtualization allows
physical network interfaces to take traffic from multiple VMs or containers;
network virtualization, with its capability to create logical network
overlays that are independent of the underlying physical topology, is
furthermore enabling virtualization of HPC infrastructures.

Publication

Accepted papers will be published in a Springer LNCS proceedings volume.

Topics of Interest

The VHPC program committee solicits original, high-quality submissions
related to virtualization across the entire software stack with a special
focus on the intersection of HPC and the cloud.

Major Topics:
- Virtualization in supercomputing environments, HPC clusters, HPC in the
  cloud and grids
- OS-level virtualization and containers (LXC, Docker, rkt, Singularity,
  Shifter, i.a.)
- Lightweight/specialized operating systems in conjunction with virtual
  machines
- Novel unikernels and use cases for virtualized HPC environments
- Performance improvements for or driven by unikernels
- Tool support for unikernels: configuration/build environments, debuggers,
  profilers
- Hypervisor extensions to mitigate side-channel attacks
  ([micro-]architectural timing attacks, privilege escalation)
- VM & container trust and security
- Containers inside VMs with hypervisor isolation
- GPU virtualization operationalization
- Approaches to GPGPU virtualization including API remoting and hypervisor
  abstraction
- Optimizations of virtual machine monitor platforms and hypervisors
- Hypervisor support for heterogeneous resources (GPUs, co-processors,
  FPGAs, etc.)
- Virtualization support for emerging memory technologies
- Virtualization in enterprise HPC and microvisors
- Software defined networks and network virtualization
- Management, deployment of virtualized environments and orchestration
  (Kubernetes i.a.)
- Workflow-pipeline container-based composability
- Checkpointing facilitation utilizing containers and VMs
- Emerging topics including multi-kernel approaches and NUMA in hypervisors
- Operating MPI in containers/VMs and unikernels
- Virtualization in data intensive computing (big data) - HPC convergence
- Adaptation of HPC technologies in the cloud (high performance networks,
  RDMA, etc.)
- Performance measurement, modelling and monitoring of virtualized/cloud
  workloads
- Latency- and jitter-sensitive workloads in virtualized/containerized
  environments
- I/O virtualization (including applications, SR-IOV, i.a.)
- Hybrid local facility + cloud compute and based storage systems,
  cloudbursting
- FPGA and many-core accelerator virtualization
- Job scheduling/control/policy and container placement in virtualized
  environments
- Cloud reliability, fault-tolerance and high-availability
- QoS and SLA in virtualized environments
- IaaS platforms, cloud frameworks and APIs
- Energy-efficient and power-aware virtualization
- Configuration management tools for containers (including in OpenStack,
  Ansible, i.a.)
- ARM-based hypervisors, ARM virtualization extensions

Special Track: GPU - Accelerator Virtualization

[OMPI users] error building openmpi-3.0.1 on Linux with gcc or Sun C and Java-10

2018-04-12 Thread Siegmar Gross

Hi,

I've tried to install openmpi-3.0.1 on my "SUSE Linux Enterprise Server 12.3
(x86_64)" with gcc-6.4.0 or Sun C 5.15 and Java-10. Unfortunately I get the
following error. I can build it with both C compilers and Java-9 and I can
build openmpi-v3.1.x-201804110302-be0843e and 
openmpi-master-201804110351-664ba32
with both compilers and Java-10.

loki openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10 159 head -7 config.log | tail -1
  $ ../openmpi-3.0.1/configure --prefix=/usr/local/openmpi-3.0.1_64_gcc 
--libdir=/usr/local/openmpi-3.0.1_64_gcc/lib64 
--with-jdk-bindir=/usr/local/jdk-10/bin 
--with-jdk-headers=/usr/local/jdk-10/include JAVA_HOME=/usr/local/jdk-10 
LDFLAGS=-m64 CC=gcc CXX=g++ FC=gfortran CFLAGS=-m64 CXXFLAGS=-m64 FCFLAGS=-m64 
CPP=cpp CXXCPP=cpp --enable-mpi-cxx --enable-cxx-exceptions --enable-mpi-java 
--with-cuda=/usr/local/cuda --with-valgrind=/usr/local/valgrind 
--with-hwloc=internal --without-verbs --with-wrapper-cflags=-std=c11 -m64 
--with-wrapper-cxxflags=-m64 --with-wrapper-fcflags=-m64 --enable-debug




loki openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10 170 more 
log.make.Linux.x86_64.64_gcc
Making all in config
make[1]: Entering directory 
'/export2/src/openmpi-3.0.1/openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10/config'

...
/JAVAH
...skipping
make[3]: Entering directory 
'/export2/src/openmpi-3.0.1/openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10/ompi/mpi/java/java'

  JAVAC    MPI.class
  JAVAH    mpi_MPI.h
  JAVAH    mpi_CartParms.h
  JAVAH    mpi_CartComm.h
  JAVAH    mpi_Constant.h
  JAVAH    mpi_Comm.h
  JAVAH    mpi_Count.h
Error: Could not find class file for 'mpi.CartParms'.
Makefile:2178: recipe for target 'mpi_CartParms.h' failed
make[3]: *** [mpi_CartParms.h] Error 1
make[3]: *** Waiting for unfinished jobs
Error: Could not find class file for 'mpi.MPI'.
Makefile:2178: recipe for target 'mpi_MPI.h' failed
make[3]: *** [mpi_MPI.h] Error 1
Error: Could not find class file for 'mpi.Comm'.
Error: Could not find class file for 'mpi.Constant'.
Error: Could not find class file for 'mpi.CartComm'.
Makefile:2178: recipe for target 'mpi_Comm.h' failed
make[3]: *** [mpi_Comm.h] Error 1
Makefile:2178: recipe for target 'mpi_Constant.h' failed
make[3]: *** [mpi_Constant.h] Error 1
Error: Could not find class file for 'mpi.Count'.
Makefile:2178: recipe for target 'mpi_CartComm.h' failed
make[3]: *** [mpi_CartComm.h] Error 1
Makefile:2178: recipe for target 'mpi_Count.h' failed
make[3]: *** [mpi_Count.h] Error 1
make[3]: Leaving directory 
'/export2/src/openmpi-3.0.1/openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10/ompi/mpi/java/java'

Makefile:1720: recipe for target 'all-recursive' failed
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory 
'/export2/src/openmpi-3.0.1/openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10/ompi/mpi/java'

Makefile:3421: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
'/export2/src/openmpi-3.0.1/openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10/ompi'

Makefile:1873: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1
loki openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10 171


loki openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10 161 ls -l 
ompi/mpi/java/java/mpi/CartComm.class

-rw-r--r-- 1 root root 2621 Apr 12 16:46 ompi/mpi/java/java/mpi/CartComm.class
loki openmpi-3.0.1-Linux.x86_64.64_gcc_jdk-10 162





Java-9:
===

loki openmpi-3.0.1-Linux.x86_64.64_gcc 166 head -7 config.log | tail -1
  $ ../openmpi-3.0.1/configure --prefix=/usr/local/openmpi-3.0.1_64_gcc 
--libdir=/usr/local/openmpi-3.0.1_64_gcc/lib64 
--with-jdk-bindir=/usr/local/jdk-9/bin 
--with-jdk-headers=/usr/local/jdk-9/include JAVA_HOME=/usr/local/jdk-9 
LDFLAGS=-m64 CC=gcc CXX=g++ FC=gfortran CFLAGS=-m64 CXXFLAGS=-m64 FCFLAGS=-m64 
CPP=cpp CXXCPP=cpp --enable-mpi-cxx --enable-cxx-exceptions --enable-mpi-java 
--with-cuda=/usr/local/cuda --with-valgrind=/usr/local/valgrind 
--with-hwloc=internal --without-verbs --with-wrapper-cflags=-std=c11 -m64 
--with-wrapper-cxxflags=-m64 --with-wrapper-fcflags=-m64 --enable-debug



loki openmpi-3.0.1-Linux.x86_64.64_gcc 168 more log.make.Linux.x86_64.64_gcc
Making all in config
make[1]: Entering directory 
'/export2/src/openmpi-3.0.1/openmpi-3.0.1-Linux.x86_64.64_gcc/config'

...
/JAVAH
...skipping
make[3]: Entering directory 
'/export2/src/openmpi-3.0.1/openmpi-3.0.1-Linux.x86_64.64_gcc/ompi/mpi/java/java'

  JAVAC    MPI.class
  JAVAH    mpi_MPI.h
  JAVAH    mpi_CartParms.h
  JAVAH    mpi_Comm.h
  JAVAH    mpi_Count.h
  JAVAH    mpi_Constant.h
  JAVAH    mpi_CartComm.h

Warning: The javah tool is planned to be removed in the next major
JDK release. The tool has been superseded by the '-h' option added
to javac in JDK 8. Users are recommended to migrate to using the
javac '-h' option; see the javac man page for more information.
...


I would be grateful if somebody could fix the problem for Java-10. Do you
need anything else? Thank you very much in advance for any help.


Kind regards

Siegmar




Re: [OMPI users] error building openmpi-3.0.1 on Linux with gcc or Sun C and Java-10

2018-04-12 Thread Kawashima, Takahiro
Siegmar,

Thanks for your report. But it is a known issue and will be fixed in Open MPI 
v3.0.2.

  https://github.com/open-mpi/ompi/pull/5029

If you want it now in v3.0.x series, try the latest nightly snapshot.

  https://www.open-mpi.org/nightly/v3.0.x/

Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu

> Hi,
> 
> I've tried to install openmpi-3.0.1 on my "SUSE Linux Enterprise Server 12.3
> (x86_64)" with gcc-6.4.0 or Sun C 5.15 and Java-10. Unfortunately I get the
> following error. I can build it with both C compilers and Java-9 and I can
> build openmpi-v3.1.x-201804110302-be0843e and 
> openmpi-master-201804110351-664ba32
> with both compilers and Java-10.
> 
> [configure command and build logs trimmed; identical to the original report above]

[OMPI users] OMPI sendrecv bugs?

2018-04-12 Thread Kaiming Ouyang
Hi all,
I am trying to test the bandwidth of intra-MPI send and recv. The code is
attached below. When I give the input 2048 (namely, each process will send
and receive 2 GB of data), the program reports:
Read 2147479552, expected 2147483648, errno = 95
Read 2147479552, expected 2147483648, errno = 98
Read 2147479552, expected 2147483648, errno = 98
Read 2147479552, expected 2147483648, errno = 98

Does this mean Open MPI does not support send and recv with data sizes of
2 GB and above, or is there a bug in my code? Thank you.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>


int main(int argc, char *argv[]) {
	int count;
	int *in;
	int i;
	int rank, size;

	MPI_Init(&argc, &argv);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	/* argv[1] is the message size in MB; count is the number of ints.
	 * Note that MPI count arguments are int. */
	long mb = atol(argv[1]);
	count = mb / 4 * 1024 * 1024;
	in = (int *)malloc(count * sizeof(int));
	for (i = 0; i < count; i++) {
		*(in + i) = i;
	}

	MPI_Barrier(MPI_COMM_WORLD);
	double time = MPI_Wtime();   /* MPI_Wtime() returns double */

	/* Ping: even ranks send to rank+1, odd ranks receive
	 * (assumes an even number of ranks). */
	if ((rank & 1) == 0) {
		MPI_Send(in, count, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);
	} else {
		MPI_Status status;
		MPI_Recv(in, count, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, &status);
	}

	/* Pong: odd ranks send back, even ranks receive. */
	if ((rank & 1) == 1) {
		MPI_Send(in, count, MPI_INT, rank - 1, 0, MPI_COMM_WORLD);
	} else {
		MPI_Status status;
		MPI_Recv(in, count, MPI_INT, rank + 1, 0, MPI_COMM_WORLD, &status);
	}

	time = MPI_Wtime() - time;
	free(in);
	MPI_Finalize();
	return 0;
}
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] OMPI sendrecv bugs?

2018-04-12 Thread Gilles Gouaillardet
Which version of Open MPI are you running?
This reminds me of a bug in CMA that has already been fixed.


Can you try again with


mpirun --mca btl_vader_single_copy_mechanism none ...
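
For what it's worth, the 2147479552 in the output looks like INT_MAX rounded
down to a page boundary, i.e. the usual Linux cap on a single read-type
system call, which fits the single-copy (CMA) bug mentioned above.
Independent of that bug, a common way to move very large buffers is to split
the logical message into chunks so that no single MPI_Send/MPI_Recv
approaches 2 GiB.  A minimal sketch (generic MPI, not an Open MPI specific
fix; send_large and recv_large are just illustrative helper names):

#include <stdlib.h>
#include <mpi.h>

#define CHUNK_ELEMS (256 * 1024 * 1024)  /* 256 Mi ints = 1 GiB per chunk */

/* Send a buffer of 'total' ints in chunks whose counts each fit an int. */
static void send_large(const int *buf, size_t total, int dest, MPI_Comm comm)
{
    size_t off = 0;
    while (off < total) {
        size_t n = total - off;
        if (n > CHUNK_ELEMS) n = CHUNK_ELEMS;
        MPI_Send(buf + off, (int)n, MPI_INT, dest, 0, comm);
        off += n;
    }
}

/* Matching chunked receive; messages between one sender/receiver pair on the
 * same communicator and tag are non-overtaking, so chunks arrive in order. */
static void recv_large(int *buf, size_t total, int src, MPI_Comm comm)
{
    size_t off = 0;
    while (off < total) {
        size_t n = total - off;
        if (n > CHUNK_ELEMS) n = CHUNK_ELEMS;
        MPI_Recv(buf + off, (int)n, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
        off += n;
    }
}

int main(int argc, char *argv[])
{
    int rank;
    size_t total = 512UL * 1024 * 1024;  /* 512 Mi ints = 2 GiB, as in the report */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *buf = calloc(total, sizeof(int));
    if (rank == 0)
        send_large(buf, total, 1, MPI_COMM_WORLD);
    else if (rank == 1)
        recv_large(buf, total, 0, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}

Another route is a contiguous derived datatype (MPI_Type_contiguous) so the
count passed to MPI_Send/MPI_Recv stays small, but chunking also keeps each
underlying transfer below per-call limits like the one apparently hit here.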


Cheers,

Gilles

On Fri, Apr 13, 2018 at 1:51 PM, Kaiming Ouyang  wrote:
> Hi all,
> I am trying to test the bandwidth of intra-MPI send and recv. The code is
> attached below. When I give the input 2048 (namely, each process will send and
> receive 2 GB of data), the program reports:
> Read 2147479552, expected 2147483648, errno = 95
> Read 2147479552, expected 2147483648, errno = 98
> Read 2147479552, expected 2147483648, errno = 98
> Read 2147479552, expected 2147483648, errno = 98
>
> Does this mean Open MPI does not support send and recv with data sizes of
> 2 GB and above, or is there a bug in my code? Thank you.
>
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users


[OMPI users] Fwd: problem related ORTE

2018-04-12 Thread Ankita m
On Wed, Apr 11, 2018 at 3:55 PM, Ankita m  wrote:

> Hello Sir
>
> Currently I am using version "openmpi-1.4.5". While submitting a parallel
> program, it fails and generates the error file I have attached. I think
> this is a run-time problem.
> I have therefore attached a zip file containing all the files requested in
> the link   https://www.open-mpi.org/community/help/
>
> Regards
> Ankita Maity
> IIT Roorkee
> India
>
> On Fri, Apr 6, 2018 at 9:55 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> Can you please send all the information listed here:
>>
>> https://www.open-mpi.org/community/help/
>>
>> Thanks!
>>
>>
>> > On Apr 6, 2018, at 8:27 AM, Ankita m  wrote:
>> >
>> > Hello Sir/Madam
>> >
>> > I am Ankita Maity, a PhD scholar from the Mechanical Dept., IIT Roorkee,
>> > India.
>> >
>> > I am facing a problem while submitting a parallel program to the HPC
>> > cluster available in our department.
>> >
>> > I have attached the error file it produces at run time.
>> >
>> > Can you please help me with the issue? I would be very grateful to you.
>> >
>> > With Regards
>> >
>> > ANKITA MAITY
>> > IIT ROORKEE
>> > INDIA
>> > ___
>> > users mailing list
>> > users@lists.open-mpi.org
>> > https://lists.open-mpi.org/mailman/listinfo/users
>>
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>>
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
>>
>
>
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users