[OMPI users] (no subject)

2018-10-31 Thread Dmitry N. Mikushin
Dear all, ompi_info reports pml components are available: $ /usr/mpi/gcc/openmpi-3.1.0rc2/bin/ompi_info -a | grep pml MCA pml: v (MCA v2.1.0, API v2.0.0, Component v3.1.0) MCA pml: monitoring (MCA v2.1.0, API v2.0.0, Component v3.1.0) MCA pml:

Re: [OMPI users] EBADF (Bad file descriptor) on a simplest "Hello world" program

2018-06-02 Thread Dmitry N. Mikushin
ping 2018-06-01 22:29 GMT+03:00 Dmitry N. Mikushin : > Dear all, > > Looks like I have a weird issue I have never encountered before. While trying to > run the simplest "Hello world" program, I get: > > $ cat hello.c > #include <mpi.h> > > int main(int argc, char* argv[])

[OMPI users] EBADF (Bad file descriptor) on a simplest "Hello world" program

2018-06-01 Thread Dmitry N. Mikushin
Dear all, Looks like I have a weird issue I have never encountered before. While trying to run the simplest "Hello world" program, I get: $ cat hello.c #include <mpi.h> int main(int argc, char* argv[]) { MPI_Init(&argc, &argv); MPI_Finalize(); return 0; } $ mpicc hello.c -o hello $ mpirun -np 1 ./hello

Re: [OMPI users] Crash in libopen-pal.so

2017-06-19 Thread Dmitry N. Mikushin
Hi Justin, If you can build the application in debug mode, try inserting valgrind into your MPI command. It's usually very good at tracking down the origins of failing memory allocations. Kind regards, - Dmitry. 2017-06-20 1:10 GMT+03:00 Sylvain Jeaugey : > Justin, can you try
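
A hedged example of what "inserting valgrind into your MPI command" can look like; the process count, flags, and program name are placeholders rather than details from this thread:

  $ mpicc -g -O0 app.c -o app
  $ mpirun -np 2 valgrind --leak-check=full --track-origins=yes ./app

mpirun then starts one valgrind instance per rank, and each instance reports the origin of invalid accesses and failed allocations for its own process.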

Re: [OMPI users] MPI + system() call + Matlab MEX crashes

2016-10-05 Thread Dmitry N. Mikushin
Hi Juraj, Although the MPI infrastructure may technically support forking, it is known that not all system resources can correctly replicate themselves into a forked process. For example, forking inside an MPI program with an active CUDA driver will result in a crash. Why not compile the MATLAB down into a

Re: [OMPI users] MPI_File_write hangs on NFS-mounted filesystem

2013-11-07 Thread Dmitry N. Mikushin
Not sure if this is related, but: I've seen a case of performance degradation on NFS and Lustre when writing NetCDF files. The reason was that the file was filled by a loop writing one 4-byte record at a time. Performance became close to that of a local hard drive when I simply introduced buffering of
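
A minimal sketch of the buffering idea described above, with hypothetical record and buffer sizes (the original code wrote NetCDF records, not raw bytes): records are accumulated in a user-space buffer and flushed in large chunks, so NFS/Lustre sees a few big writes instead of millions of 4-byte ones.

  #include <stdio.h>
  #include <string.h>

  enum { RECORD_SIZE = 4, BUF_SIZE = 1 << 20 };   /* hypothetical sizes */

  static char buf[BUF_SIZE];
  static size_t used = 0;

  /* Issue one large write instead of many tiny ones. */
  static void flush_records(FILE* f) {
      if (used > 0) {
          fwrite(buf, 1, used, f);
          used = 0;
      }
  }

  /* Copy a 4-byte record into the buffer, flushing when it fills up. */
  static void write_record(FILE* f, const void* rec) {
      if (used + RECORD_SIZE > BUF_SIZE)
          flush_records(f);
      memcpy(buf + used, rec, RECORD_SIZE);
      used += RECORD_SIZE;
  }

  int main(void) {
      FILE* f = fopen("records.bin", "wb");
      if (!f) return 1;
      for (int i = 0; i < 1000000; i++)
          write_record(f, &i);   /* int is 4 bytes on the platforms discussed here */
      flush_records(f);
      fclose(f);
      return 0;
  }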

Re: [OMPI users] Stream interactions in CUDA

2012-12-12 Thread Dmitry N. Mikushin
Hi Justin, Quick grepping reveals several cuMemcpy calls in OpenMPI. Some of them are even synchronous, meaning they go to stream 0. I think the best way to explore this sort of behavior is to run the OpenMPI runtime (thanks to its open-source nature!) under a debugger. Rebuild OpenMPI with -g -O0, add some
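
A hedged sketch of that debugging workflow; the install prefix, process count, and the use of xterm to hold one gdb per rank are assumptions, not details from the thread:

  $ ./configure --prefix=$HOME/ompi-debug CFLAGS="-g -O0" CXXFLAGS="-g -O0"
  $ make all install
  $ $HOME/ompi-debug/bin/mpirun -np 2 xterm -e gdb ./your_app

Breakpoints set on the cuMemcpy wrappers inside the Open MPI sources can then show which code path issues the synchronous copies.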

Re: [OMPI users] fork in Fortran

2012-08-30 Thread Dmitry N. Mikushin
Hi, Modern Fortran has a feature called ISO_C_BINDING. It essentially allows you to declare a binding for an external C function so that it can be called from a Fortran program. You only need to provide a corresponding interface. The ISO_C_BINDING module contains C-like extensions to the type system, but you don't need them, as

Re: [OMPI users] bug in CUDA support for dual-processor systems?

2012-08-02 Thread Dmitry N. Mikushin
Hi Zbigniew, > a) I noticed that on my 6-GPU 2-CPU platform the initialization of CUDA 4.2 > takes a long time, approx 10 seconds. > Do you think I should report this as a bug to nVidia? This is an expected amount of time for creating driver contexts on so many devices. I'm sure NVIDIA already

Re: [OMPI users] undefined reference to `netcdf_mp_nf90_open_'

2012-06-26 Thread Dmitry N. Mikushin
Dear Syed, Why do you think it is related to MPI? You seem to be compiling the COSMO model, which depends on the netcdf library, but the symbols are not passed to the linker for some reason. The two main reasons are: (1) the library linking flag is missing (check that you have something like -lnetcdf -lnetcdff in
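
A hedged example of the kind of link line being suggested; the NetCDF installation path and output name are hypothetical, and the object list would come from the COSMO build system:

  $ mpif90 *.o -o cosmo -L/opt/netcdf/lib -lnetcdff -lnetcdf

With a single-pass linker, the library that defines the Fortran nf90_* module procedures usually has to precede the C library, so the order may need adjusting relative to the "-lnetcdf -lnetcdff" mentioned above.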

Re: [OMPI users] NVCC mpi.h: error: attribute "__deprecated__" does not take arguments

2012-06-19 Thread Dmitry N. Mikushin
18, 2012 11:00 AM > > *To:* Open MPI Users > *Cc:* Олег Рябков > *Subject:* Re: [OMPI users] NVCC mpi.h: error: attribute "__deprecated__" > does not take arguments > > Hi Dmitry: > > Let me look into this. > > R

Re: [OMPI users] NVCC mpi.h: error: attribute "__deprecated__" does not take arguments

2012-06-18 Thread Dmitry N. Mikushin
Yeah, definitely. Thank you, Jeff. - D. 2012/6/18 Jeff Squyres <jsquy...@cisco.com> > On Jun 18, 2012, at 10:41 AM, Dmitry N. Mikushin wrote: > > > No, I'm configuring with gcc, and for openmpi-1.6 it works with nvcc > without a problem. > > Then I think Rolf

Re: [OMPI users] NVCC mpi.h: error: attribute "__deprecated__" does not take arguments

2012-06-18 Thread Dmitry N. Mikushin
MPI with one compiler and then trying to compile > with another (like the command line in your mail implies), all bets are off > because Open MPI has tuned itself to the compiler that it was configured > with. > > > > > On Jun 18, 2012, at 10:20 AM, Dmitry N. Mikushin wrote: > &

[OMPI users] NVCC mpi.h: error: attribute "__deprecated__" does not take arguments

2012-06-18 Thread Dmitry N. Mikushin
Hello, With openmpi svn trunk as of Repository Root: http://svn.open-mpi.org/svn/ompi Repository UUID: 63e3feb5-37d5-0310-a306-e8a459e722fe Revision: 26616 we are observing the following strange issue (see below). What do you think: is it a problem with NVCC or with OpenMPI? Thanks, - Dima.

Re: [OMPI users] starting open-mpi

2012-05-11 Thread Dmitry N. Mikushin
Hi Ghobad, The error message means that OpenMPI wants to use cl.exe, the compiler from Microsoft Visual Studio. Here http://www.open-mpi.org/software/ompi/v1.5/ms-windows.php it is stated: This is the first binary release for Windows, with basic MPI libraries and executables. The supported

Re: [OMPI users] possibly undefined macro: AC_PROG_LIBTOOL

2011-12-29 Thread Dmitry N. Mikushin
the Autoconf documentation. autoreconf: /usr/bin/autoconf failed with exit status: 1 Command failed: ./autogen.sh Does it work for you with 2.67? Thanks, - D. 2011/12/30 Ralph Castain <r...@open-mpi.org>: > > On Dec 29, 2011, at 3:39 PM, Dmitry N. Mikushin wrote: > >> No, that wa

Re: [OMPI users] possibly undefined macro: AC_PROG_LIBTOOL

2011-12-29 Thread Dmitry N. Mikushin
t is way too old for us. However, what you just sent now shows > 2.67, which would be fine. > > Why the difference? > > > On Dec 29, 2011, at 3:27 PM, Dmitry N. Mikushin wrote: > >> Hi Ralph, >> >> URL: http://svn.open-mpi.org/svn/ompi/trunk >> Reposi

Re: [OMPI users] possibly undefined macro: AC_PROG_LIBTOOL

2011-12-29 Thread Dmitry N. Mikushin
eet the minimum required > levels? The requirements differ by version. > > On Dec 29, 2011, at 2:52 PM, Dmitry N. Mikushin wrote: > >> Dear Open MPI Community, >> >> I need a custom OpenMPI build. While running ./autogen.pl on Debian >> Squeeze, there is an err
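
One way to answer that question is simply to print the installed versions (these are the standard autotools front-ends, nothing specific to this thread):

  $ autoconf --version
  $ automake --version
  $ libtool --version
  $ m4 --version

and compare the numbers against the minimum levels documented for the Open MPI branch being built (the HACKING file in the source tree lists them for developer/svn builds).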

[OMPI users] possibly undefined macro: AC_PROG_LIBTOOL

2011-12-29 Thread Dmitry N. Mikushin
Dear Open MPI Community, I need a custom OpenMPI build. While running ./autogen.pl on Debian Squeeze, there is an error: --- Found autogen.sh; running... autoreconf2.50: Entering directory `.' autoreconf2.50: configure.in: not using Gettext autoreconf2.50: running: aclocal --force -I m4

Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?

2011-12-14 Thread Dmitry N. Mikushin
already have all MPI processes (you can check by adding a sleep or > something like that), but they are not synchronized and do not know each > other. This is what MPI_Init is used for. > > > > Matthieu Brucher > > 2011/12/14 Dmitry N. Mikushin <maemar...@gmail.com> &

[OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?

2011-12-14 Thread Dmitry N. Mikushin
Dear colleagues, For the GPU Winter School powered by the Moscow State University cluster "Lomonosov", OpenMPI 1.7 was built to test and popularize the CUDA capabilities of MPI. There is one strange warning I cannot understand: the OpenMPI runtime suggests initializing CUDA prior to MPI_Init. Sorry, but how
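
A hedged sketch of what initializing CUDA prior to MPI_Init can look like while still giving each MPI process its own GPU. It assumes the job is launched by Open MPI's mpirun, which exports OMPI_COMM_WORLD_LOCAL_RANK into the environment; the round-robin device mapping and the cudaFree(0) call used to force context creation are illustration choices, not part of the original message.

  #include <stdio.h>
  #include <stdlib.h>
  #include <mpi.h>
  #include <cuda_runtime.h>

  int main(int argc, char* argv[]) {
      /* The node-local rank is available from the environment before MPI_Init. */
      const char* s = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
      int local_rank = s ? atoi(s) : 0;

      int ndevices = 1;
      cudaGetDeviceCount(&ndevices);
      cudaSetDevice(local_rank % ndevices);   /* a unique GPU per local process */
      cudaFree(0);                            /* force CUDA context creation now */

      MPI_Init(&argc, &argv);                 /* CUDA is initialized before MPI */
      /* ... CUDA-aware MPI communication ... */
      MPI_Finalize();
      return 0;
  }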

Re: [OMPI users] configure with cuda

2011-10-27 Thread Dmitry N. Mikushin
> CUDA is an Nvidia-only technology, so it might be a bit limiting in some > cases. I think here it's more a question of compatibility (that is ~ 1.0 / [magnitude of effort]) rather than corporate selfishness >:) Consider the memory buffer implementation: in contrast to CUDA, in OpenCL they are some

Re: [OMPI users] OpenMPI with CPU of different speed.

2011-10-05 Thread Dmitry N. Mikushin
Hi, Maybe Mickaël means that load balancing could be achieved simply by spawning a varying number of MPI processes, depending on how many cores a particular node has? This should be possible, but the accuracy of such balancing will be task-dependent due to other factors, like memory operations and
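
A hedged illustration of that idea using Open MPI's hostfile mechanism; the node names and core counts are made up:

  $ cat hosts
  fastnode slots=16
  slownode slots=4
  $ mpirun --hostfile hosts -np 20 ./app

The faster node then runs 16 of the 20 ranks and the slower node runs 4, which balances the work only to the extent that the per-rank cost is uniform.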

Re: [OMPI users] [SOLVED] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-10-03 Thread Dmitry N. Mikushin
ave the specs file into the compiler's folder /usr/lib/gcc/<target>/<version>/ For example, in the case of Ubuntu 10.10 with gcc 4.6.1 it's /usr/lib/gcc/x86_64-linux-gnu/4.6.1/ With this change there are no unresolvable relocations anymore! - D. 2011/10/3 Dmitry N. Mikushin <maemar...@gmail.com>: > Hi, > > Here's a repro

Re: [OMPI users] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-10-03 Thread Dmitry N. Mikushin
ntu/Linaro 4.6.1-9ubuntu3) 2011/9/28 Dmitry N. Mikushin <maemar...@gmail.com>: > Hi, > > Interestingly, the errors are gone after I removed "-g" from the app > compile options. > > I tested again on the fresh Ubuntu 11.10 install: both 1.4.3 and 1.5.4 > compil

Re: [OMPI users] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-09-28 Thread Dmitry N. Mikushin
-bit. - D. 2011/9/24 Jeff Squyres <jsquy...@cisco.com>: > Check the output from when you ran Open MPI's configure and "make all" -- did > it decide to build the F77 interface? > > Also check that gcc and gfortran output .o files of the same bitness / type. > > >

Re: [OMPI users] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-09-24 Thread Dmitry N. Mikushin
quy...@cisco.com>: > Can you compile / link simple OMPI applications without this problem? > > On Sep 24, 2011, at 7:54 AM, Dmitry N. Mikushin wrote: > >> Hi Jeff, >> >> Today I've verified this application on the Feroda 15 x86_64, where >> I'm usually building OpenMPI fr

Re: [OMPI users] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-09-24 Thread Dmitry N. Mikushin
nk itself? > > Try running "file" on the Open MPI libraries and/or your target application > .o files to see what their bitness is, etc. > > > On Sep 22, 2011, at 3:15 PM, Dmitry N. Mikushin wrote: > >> Hi Jeff, >> >> You're right because I also tri

Re: [OMPI users] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-09-22 Thread Dmitry N. Mikushin
try to link > them together). > > Can you verify that everything was built with all the same 32/64? > > > On Sep 22, 2011, at 1:21 PM, Dmitry N. Mikushin wrote: > >> Hi, >> >> OpenMPI 1.5.4 compiled with gcc 4.6.1 and linked with target app gives >> a

Re: [OMPI users] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-09-22 Thread Dmitry N. Mikushin
Same error when configured with --with-pic --with-gnu-ld 2011/9/22 Dmitry N. Mikushin <maemar...@gmail.com>: > Hi, > > OpenMPI 1.5.4 compiled with gcc 4.6.1 and linked with target app gives > a load of linker messages like this one: > > /usr/bin/ld: ../../lib/libuti

[OMPI users] unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_*

2011-09-22 Thread Dmitry N. Mikushin
Hi, OpenMPI 1.5.4 compiled with gcc 4.6.1 and linked with target app gives a load of linker messages like this one: /usr/bin/ld: ../../lib/libutil.a(parallel_utilities.o)(.debug_info+0x529d): unresolvable R_X86_64_64 relocation against symbol `mpi_fortran_argv_null_ There are a lot of similar

Re: [OMPI users] Compiling both 32-bit and 64-bit?

2011-08-24 Thread Dmitry N. Mikushin
information. link: invalid option -- 'd' Try `link --help' for more information. configure: error: unknown naming convention: 2011/8/24 Barrett, Brian W <bwba...@sandia.gov>: > On 8/24/11 11:29 AM, "Dmitry N. Mikushin" <maemar...@gmail.com> wrote: > >>Quick

[OMPI users] Compiling both 32-bit and 64-bit?

2011-08-24 Thread Dmitry N. Mikushin
Hi, Quick question: is there an easy switch to compile and install both 32-bit and 64-bit OpenMPI libraries into a single tree? E.g. 64-bit in /prefix/lib64 and 32-bit in /prefix/lib. Thanks, - D.
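
For reference, a hedged sketch of the two-pass build this question usually leads to; the prefix and flags are illustrative, and the reply above shows that such a pass can still fail in configure:

  $ ./configure --prefix=/prefix --libdir=/prefix/lib64 CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64
  $ make all install && make distclean
  $ ./configure --prefix=/prefix --libdir=/prefix/lib CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32
  $ make all install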

Re: [OMPI users] Error installing OpenMPI 1.5.3

2011-07-10 Thread Dmitry N. Mikushin
Sorry, disregard this, the issue was created by my own buggy compiler wrapper. - D. 2011/7/10 Dmitry N. Mikushin <maemar...@gmail.com>: > Hi, > > Maybe it would be useful to report the openmpi 1.5.3 archive currently > has a strange issue when installing on Fedora

[OMPI users] Error installing OpenMPI 1.5.3

2011-07-10 Thread Dmitry N. Mikushin
Hi, Maybe it would be useful to report that the openmpi 1.5.3 archive currently has a strange issue when installing on Fedora 15 x86_64 (gcc 4.6) that *does not* happen with 1.4.3: $ ../configure --prefix=/opt/openmpi_kgen-1.5.3 CC=gcc CXX=g++ F77=gfortran FC=gfortran ... $ sudo make install ...

Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Dmitry N. Mikushin
/ >> Fortran support, or >> >> b) when you built/installed Open MPI, it couldn't find a working C++ / >> Fortran compiler, so it skipped building support for them. >> >> >> >> On Jun 22, 2011, at 12:05 PM, Dmitry N. Mikushin wrote: >> >>>

Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Dmitry N. Mikushin
y debugging support: no >         libltdl support: yes >   Heterogeneous support: no >  mpirun default --prefix: no >         MPI I/O support: yes >       MPI_WTIME support: gettimeofday > Symbol visibility support: yes >  .. > > > On Wed, Jun 22, 2011 at 12:34 PM, Dmitry

Re: [OMPI users] mpif90 compiler non-functional

2011-06-22 Thread Dmitry N. Mikushin
Alexandre, Did you have a working Fortran compiler on the system at the time of the OpenMPI compilation? In my experience, Fortran bindings are always compiled by default. How did you configure it, and did you notice any messages regarding Fortran support in the configure output? - D. 2011/6/22 Alexandre Souza
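
A quick way to check what an existing installation was built with (ompi_info ships with every Open MPI install; the grep pattern is just an example):

  $ ompi_info | grep -i fortran

If the Fortran 90 bindings are reported as "no", the build skipped mpif90 support, typically because configure could not find a working Fortran compiler.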

Re: [OMPI users] USE mpi

2011-05-08 Thread Dmitry N. Mikushin
t may be different - > it may not inherit everything from your environment. > > I advised the user to "sudo -s" and ten setup the compiler environment and > then run make install. > > Sent from my phone. No type good. > > On May 7, 2011, at 9:37 PM, "Dmitry N. Miku

Re: [OMPI users] USE mpi

2011-05-07 Thread Dmitry N. Mikushin
not; if ./configure CC=/full/path/to/icc, then both "make" and "make install" work. Nothing needs to be searched, icc is already in PATH, since compilevars are sourced in profile.d. Or am I missing something? Thanks, - D. 2011/5/8 Tim Prince <n...@aol.com>: > On 5/7/2011 2:

Re: [OMPI users] USE mpi

2011-05-07 Thread Dmitry N. Mikushin
> didn't find the icc compiler Jeff, on 1.4.3 I saw the same issue, even more generally: "make install" cannot find the compiler if it is an alien compiler (i.e. not the default gcc); the same situation occurs for intel or llvm, for example. The workaround is to specify full paths to the compilers with CC=...

Re: [OMPI users] Help: HPL Problem

2011-05-07 Thread Dmitry N. Mikushin
Eric, You have a link-time error complaining about the absence of some libraries. At least two of them, libm and libdl, must be provided by the system, not by the MPI implementation. Could you locate them in /usr/lib64? It would also be useful to figure out whether the problem is global or specific to HPL: do
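
Two quick checks in the spirit of that suggestion; the paths and the test program are illustrative:

  $ ls /usr/lib64/libm.so* /usr/lib64/libdl.so*
  $ mpicc hello.c -o hello -lm -ldl

If a trivial test program links fine while HPL does not, the problem is more likely in HPL's makefile settings (library paths or ordering) than in the system or the MPI installation.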

Re: [OMPI users] OpenMPI-PGI: /usr/bin/ld: Warning: size of symbol `#' changed from # in #.o to # in #.so

2011-03-27 Thread Dmitry N. Mikushin
I checked that this issue is not caused by using different compile options for different libraries. There is a set of libraries and an executable compiled with mpif90, and this warning comes for the executable's object and one of the libraries... 2011/3/25 Dmitry N. Mikushin <maemar...@gmail.com>

[OMPI users] OpenMPI-PGI: /usr/bin/ld: Warning: size of symbol `#' changed from # in #.o to # in #.so

2011-03-24 Thread Dmitry N. Mikushin
Hi, I'm wondering if anybody has seen something similar: have you succeeded in running an application compiled with openmpi-pgi-1.4.2 that produces the following warnings: /usr/bin/ld: Warning: size of symbol `mpi_fortran_errcodes_ignore_' changed from 4 in foo.o to 8 in lib/libfoolib2.so /usr/bin/ld: