Re: [OMPI users] Slides from the Open MPI SC'15 State of the Union BOF

2015-11-19 Thread Lev Givon
…it in Firefox 42 and 3 other PDF viewers (on Linux, at least), all of the programs claimed that the file is either corrupted or misformatted. -- Lev Givon Bionet Group | Neurokernel Project http://lebedov.github.io/ http://neurokernel.github.io/

Re: [OMPI users] 1.10.1 appears to break mpi4py

2015-11-09 Thread Lev Givon
…scatter/gather mpi4py test errors are eliminated by the above patch. Thanks, -- Lev Givon Bionet Group | Neurokernel Project http://lebedov.github.io/ http://neurokernel.github.io/
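For context, the kind of mpi4py collective round trip exercised by those tests looks roughly like the sketch below (illustrative only, not the actual mpi4py test suite; run it under mpiexec with two or more ranks):

    # illustrative sketch of an mpi4py scatter/gather round trip of the kind
    # the test suite exercises; not taken from the mpi4py tests themselves
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # root prepares one item per rank; the other ranks pass None
    data = [i * 10 for i in range(size)] if rank == 0 else None
    chunk = comm.scatter(data, root=0)       # each rank receives one element
    result = comm.gather(chunk + 1, root=0)  # root collects the modified values
    if rank == 0:
        print(result)                        # e.g. [1, 11, 21, ...]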

Re: [OMPI users] 1.10.1 appears to break mpi4py

2015-11-09 Thread Lev Givon
…googlegroups.com mailing list. -- Lev Givon Bionet Group | Neurokernel Project http://lebedov.github.io/ http://neurokernel.github.io/

Re: [OMPI users] reported number of processes emitting error much larger than number started/spawned by mpiexec?

2015-09-20 Thread Lev Givon
…that's correct. I'm already in communication with Rolf vandeVaart regarding the error [1]. Unfortunately, neither of us has made much headway in tracking down the source of the problem so far. [1] http://www.open-mpi.org/community/lists/users/2015/09/27526.php -- Lev Givon Bionet…

Re: [OMPI users] reported number of processes emitting error much larger than number started/spawned by mpiexec?

2015-09-20 Thread Lev Givon
Received from Ralph Castain on Sun, Sep 20, 2015 at 05:08:10PM EDT: > > On Sep 20, 2015, at 12:57 PM, Lev Givon wrote: > > > > While debugging a problem that is causing emission of a non-fatal OpenMPI error message to stderr, the error message is followed…

[OMPI users] reported number of processes emitting error much larger than number started/spawned by mpiexec?

2015-09-20 Thread Lev Givon
…: no, ORTE progress: yes, Event lib: yes) -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/

[OMPI users] tracking down what's causing a cuIpcOpenMemHandle error emitted by OpenMPI

2015-09-02 Thread Lev Givon
…Ubuntu systems are 64-bit and have been kept up to date with the latest package updates. Any thoughts as to what could be causing the problem? -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/

Re: [OMPI users] cuIpcOpenMemHandle failure when using OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service

2015-05-21 Thread Lev Givon
Received from Lev Givon on Thu, May 21, 2015 at 11:32:33AM EDT: > Received from Rolf vandeVaart on Wed, May 20, 2015 at 07:48:15AM EDT: > > (snip) > > > I see that you mentioned you are starting 4 MPS daemons. Are you following the instructions here? …

Re: [OMPI users] cuIpcOpenMemHandle failure when using OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service

2015-05-21 Thread Lev Givon
…daemons as described in the aforementioned blog post? > Because of this question, we realized we need to update our documentation as well. -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/

Re: [OMPI users] cuIpcOpenMemHandle failure when using OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service

2015-05-19 Thread Lev Givon
Received from Rolf vandeVaart on Tue, May 19, 2015 at 08:28:46PM EDT: > >-----Original Message----- > >From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon > >Sent: Tuesday, May 19, 2015 6:30 PM > >To: us...@open-mpi.org > >Subject: [OMPI users] cuIpcOpenMemHandle…

Re: [OMPI users] cuIpcOpenMemHandle failure when using OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service

2015-05-19 Thread Lev Givon
Received from Rolf vandeVaart on Tue, May 19, 2015 at 08:28:46PM EDT: > >-----Original Message----- > >From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon > >Sent: Tuesday, May 19, 2015 6:30 PM > >To: us...@open-mpi.org > >Subject: [OMPI users]…

[OMPI users] cuIpcOpenMemHandle failure when using OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service

2015-05-19 Thread Lev Givon
…that doesn't seem to have any effect on the problem. Rebooting the machine also has no effect. I should add that my program runs without any error if the groups of MPI processes talk directly to the GPUs instead of via MPS. Does anyone have any ideas as to what could be going…

Re: [OMPI users] getting OpenMPI 1.8.4 w/ CUDA to look for absolute path to libcuda.so.1

2015-04-29 Thread Lev Givon
Received from Rolf vandeVaart on Wed, Apr 29, 2015 at 11:14:15AM EDT: > >-----Original Message----- > >From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon > >Sent: Wednesday, April 29, 2015 10:54 AM > >To: us...@open-mpi.org > >Subject: [OMPI users]…

[OMPI users] getting OpenMPI 1.8.4 w/ CUDA to look for absolute path to libcuda.so.1

2015-04-29 Thread Lev Givon
…compiler wrappers to include -Wl,-rpath -Wl,/usr/lib/x86_64-linux-gnu), but that doesn't seem to help. -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/

Re: [OMPI users] parsability of ompi_info --parsable output

2015-04-08 Thread Lev Givon
…a note of the suggestion here: https://github.com/open-mpi/ompi/issues/515 Thanks, -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/

Re: [OMPI users] parsability of ompi_info --parsable output

2015-04-08 Thread Lev Givon
Received from Ralph Castain on Wed, Apr 08, 2015 at 10:46:58AM EDT: > > > On Apr 8, 2015, at 7:23 AM, Lev Givon wrote: > > > > The output of ompi_info --parsable is somewhat difficult to parse programmatically because it doesn't escape or quote fields that…

[OMPI users] parsability of ompi_info --parsable output

2015-04-08 Thread Lev Givon
…progress: yes, Event lib: yes) Is there some way to facilitate machine parsing of the output of ompi_info without having to special-case those options/parameters whose data fields might contain colons? If not, it would be nice to quote such fields in future releases of ompi_info. -- Lev Givon Bionet…
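To illustrate the problem being described, a naive parser might split each record on colons as in the sketch below (the field handling is an assumption for illustration, not the documented ompi_info record layout); any data field that itself contains a colon makes the split ambiguous, which is precisely the complaint:

    # naive parser for ompi_info --parsable output; mis-splits whenever a data
    # field (e.g. a help string or a path list) itself contains a colon
    import subprocess

    out = subprocess.check_output(['ompi_info', '--parsable', '--all'],
                                  universal_newlines=True)
    for line in out.splitlines():
        fields = line.split(':')
        if len(fields) < 2:
            continue
        # treat the last field as the value and everything else as the key
        key, value = ':'.join(fields[:-1]), fields[-1]
        print(key, '=>', value)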

Re: [OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-29 Thread Lev Givon
Received from Rolf vandeVaart on Fri, Mar 27, 2015 at 04:09:58PM EDT: > >-----Original Message----- > >From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon > >Sent: Friday, March 27, 2015 3:47 PM > >To: us...@open-mpi.org > >Subject: [OMPI users]…

[OMPI users] segfault during MPI_Isend when transmitting GPU arrays between multiple GPUs

2015-03-27 Thread Lev Givon
…on Ubuntu 14.04. -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/
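The pattern under discussion is a non-blocking device-to-device transfer roughly like the sketch below. It is illustrative only: it assumes a CUDA-aware OpenMPI build with at least two visible GPUs, and it uses CuPy arrays with an mpi4py version that understands __cuda_array_interface__ as a stand-in for whatever GPU array library the original program used.

    # hedged sketch of a GPU-to-GPU non-blocking send/receive via mpi4py;
    # assumes a CUDA-aware OpenMPI build, at least two visible GPUs, and an
    # mpi4py recent enough to accept objects exposing __cuda_array_interface__
    from mpi4py import MPI
    import cupy as cp

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    cp.cuda.Device(rank).use()   # one GPU per rank

    if rank == 0:
        buf = cp.arange(1024, dtype=cp.float32)             # data on GPU 0
        req = comm.Isend([buf, MPI.FLOAT], dest=1, tag=7)
    else:
        buf = cp.empty(1024, dtype=cp.float32)              # landing buffer on GPU 1
        req = comm.Irecv([buf, MPI.FLOAT], source=0, tag=7)
    req.Wait()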

Re: [OMPI users] compiling OpenMPI 1.8.4 on system with multiarched SLURM libs (Ubuntu 15.04 prerelease)

2015-03-25 Thread Lev Givon
Received from Ralph Castain on Wed, Mar 04, 2015 at 10:03:06AM EST: > > On Mar 3, 2015, at 9:41 AM, Lev Givon wrote: > > > > Received from Ralph Castain on Sun, Mar 01, 2015 at 10:31:15AM EST: > >>> On Feb 26, 2015, at 1:19 PM, Lev Givon wrote: …

Re: [OMPI users] compiling OpenMPI 1.8.4 on system with multiarched SLURM libs (Ubuntu 15.04 prerelease)

2015-03-03 Thread Lev Givon
Received from Ralph Castain on Sun, Mar 01, 2015 at 10:31:15AM EST: > > On Feb 26, 2015, at 1:19 PM, Lev Givon wrote: > > > > Received from Ralph Castain on Thu, Feb 26, 2015 at 04:14:05PM EST: > >>> On Feb 26, 2015, at 1:07 PM, Lev Givon wrote: …

Re: [OMPI users] compiling OpenMPI 1.8.4 on system with multiarched SLURM libs (Ubuntu 15.04 prerelease)

2015-02-26 Thread Lev Givon
Received from Ralph Castain on Thu, Feb 26, 2015 at 04:14:05PM EST: > > On Feb 26, 2015, at 1:07 PM, Lev Givon wrote: > > > > I recently tried to build OpenMPI 1.8.4 on a daily release of what will eventually become Ubuntu 15.04 (64-bit) with the --with-slurm and -…

[OMPI users] compiling OpenMPI 1.8.4 on system with multiarched SLURM libs (Ubuntu 15.04 prerelease)

2015-02-26 Thread Lev Givon
…multiarch location? -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/

[OMPI users] using MPI_Comm_spawn in OpenMPI 1.8.4 with SLURM

2015-02-26 Thread Lev Givon
…possibly a more recent version than 2.6.5) to submit spawning OpenMPI jobs? If so, what might be causing the above error? -- Lev Givon Bionet Group | Neurokernel Project http://www.columbia.edu/~lev/ http://lebedov.github.io/ http://neurokernel.github.io/
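For reference, the dynamic-spawn pattern at issue looks roughly like the mpi4py sketch below (worker.py and the process count are placeholders; whether the spawn succeeds under a SLURM allocation is exactly the question raised above):

    # parent side of an MPI_Comm_spawn call via mpi4py; worker.py is a
    # hypothetical child script that calls MPI.Comm.Get_parent(), joins the
    # bcast below with root=0, and then disconnects
    import sys
    from mpi4py import MPI

    children = MPI.COMM_SELF.Spawn(sys.executable,
                                   args=['worker.py'],
                                   maxprocs=4)
    children.bcast({'msg': 'hello from parent'}, root=MPI.ROOT)
    children.Disconnect()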

Re: [OMPI users] another mpirun + xgrid question

2007-09-11 Thread Lev Givon
Received from Neeraj Chourasia on Mon, Sep 10, 2007 at 11:49:03PM EDT: > On Mon, 2007-09-10 at 15:35 -0400, Lev Givon wrote: > > When launching an MPI program with mpirun on an xgrid cluster, is there a way to cause the program being run to be temporarily copied to the compute nodes…

[OMPI users] another mpirun + xgrid question

2007-09-10 Thread Lev Givon
When launching an MPI program with mpirun on an xgrid cluster, is there a way to cause the program being run to be temporarily copied to the compute nodes in the cluster when executed (i.e., similar to what the xgrid command line tool does)? Or is it necessary to make the program being run available…

Re: [OMPI users] running jobs on a remote XGrid cluster via mpirun

2007-09-04 Thread Lev Givon
Received from Brian Barrett on Tue, Aug 28, 2007 at 05:07:51PM EDT: > On Aug 28, 2007, at 10:59 AM, Lev Givon wrote: > > > Received from Brian Barrett on Tue, Aug 28, 2007 at 12:22:29PM EDT: > >> On Aug 27, 2007, at 3:14 PM, Lev Givon wrote: …

Re: [OMPI users] OpenMPI and Port Range

2007-08-30 Thread Lev Givon
Received from George Bosilca on Thu, Aug 30, 2007 at 07:42:52PM EDT: > I have a patch for this, but I never felt a real need for it, so I never pushed it into the trunk. I'm not completely convinced that we need it, except in some really strange situations (read: grid). Why do you need a port…

Re: [OMPI users] OpenMPI and Port Range

2007-08-30 Thread Lev Givon
Received from Simon Hammond on Thu, Aug 30, 2007 at 12:31:15PM EDT: > Hi all, > > Is there any way to specify the ports that OpenMPI can use? > > I'm using a TCP/IP network in a closed environment; only certain ports can be used. > > Thanks, > > Si Hammond > University of Warwick I don't believe…

Re: [OMPI users] running jobs on a remote XGrid cluster via mpirun

2007-08-28 Thread Lev Givon
Received from Brian Barrett on Tue, Aug 28, 2007 at 12:22:29PM EDT: > On Aug 27, 2007, at 3:14 PM, Lev Givon wrote: > > > I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate Mac client that I am using to submit jobs to the head (controller) node of the cluster…

[OMPI users] running jobs on a remote XGrid cluster via mpirun

2007-08-27 Thread Lev Givon
I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate Mac client that I am using to submit jobs to the head (controller) node of the cluster. The cluster's compute nodes are all connected to the head node via a private network and are not running any firewalls. When I try running jobs w…

Re: [OMPI users] building static and shared OpenMPI libraries on MacOSX

2007-08-22 Thread Lev Givon
Received from Brian Barrett on Wed, Aug 22, 2007 at 10:50:09AM EDT: > On Aug 21, 2007, at 10:52 PM, Lev Givon wrote: > > > (Running ompi_info after installing the build confirms the absence of said components). My concern, unsurprisingly, is motivated by a desire…

Re: [OMPI users] building static and shared OpenMPI libraries on MacOSX

2007-08-22 Thread Lev Givon
Received from Brian Barrett on Wed, Aug 22, 2007 at 12:05:32AM EDT: > On Aug 21, 2007, at 3:32 PM, Lev Givon wrote: > > > configure: WARNING: *** Shared libraries have been disabled (--disable-shared) > > configure: WARNING: *** Building MCA components as DSOs…

[OMPI users] building static and shared OpenMPI libraries on MacOSX

2007-08-21 Thread Lev Givon
According to the OpenMPI FAQ, specifying the configure option --enable-static without specifying --disable-shared should build both shared and static versions of the libraries. When I tried these options on MacOSX 10.4.10 with OpenMPI 1.2.3, however, the following lines in the configure output seem to imply…