Re: [OMPI users] mpi_wtime implementation

2014-11-17  Daniels, Marcus G
On Mon, 2014-11-17 at 17:31, Dave Love wrote:
> I discovered from looking at the mpiP profiler that OMPI always uses
> gettimeofday rather than clock_gettime to implement mpi_wtime on
> GNU/Linux, and that looks sub-optimal.

It can be very expensive in practice, especially for codes that
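
For context, a minimal sketch of the two clock sources being compared, assuming the concern is resolution and monotonicity (illustrative C only, not Open MPI's actual MPI_Wtime implementation):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

/* microsecond resolution; subject to wall-clock adjustments */
static double wtime_gettimeofday(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (double)tv.tv_sec + 1.0e-6 * (double)tv.tv_usec;
}

/* nanosecond resolution; CLOCK_MONOTONIC is immune to clock resets */
static double wtime_clock_gettime(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + 1.0e-9 * (double)ts.tv_nsec;
}

int main(void)
{
    printf("gettimeofday:  %.9f\n", wtime_gettimeofday());
    printf("clock_gettime: %.9f\n", wtime_clock_gettime());
    return 0;
}

(On older glibc, linking with -lrt may be needed for clock_gettime.)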

Re: [OMPI users] GCC 4.9 and MPI_F08?

2014-08-14  Daniels, Marcus G
-----Original Message-----
From: Jeff Squyres (jsquyres) [mailto:jsquy...@cisco.com]
Sent: Wednesday, August 13, 2014 10:00 AM
To: Open MPI User's List
Cc: Daniels, Marcus G
Subject: Re: [OMPI users] GCC 4.9 and MPI_F08?

Marcus --

The fix was applied yesterday to the v1.8 branch. Would you mind testing

Re: [OMPI users] GCC 4.9 and MPI_F08?

2014-08-12  Daniels, Marcus G
Hi Jeff,

On Tue, 2014-08-12 at 16:18, Jeff Squyres (jsquyres) wrote:
> Can you send the output from configure, the config.log file, and the
> ompi_config.h file?

Attached. configure.log comes from
(./configure --prefix=/usr/projects/eap/tools/openmpi/1.8.2rc3 2>&1) > configure.log

Re: [OMPI users] GCC 4.9 and MPI_F08?

2014-08-12  Daniels, Marcus G
On Tue, 2014-08-12 at 15:50, Jeff Squyres (jsquyres) wrote:
> It should be in the 1.8.2rc tarball (i.e., to be included in the
> soon-to-be-released 1.8.2).
>
> Want to give it a whirl before release to let us know if it works for you?
>
> http://www.open-mpi.org/software/ompi/v1.8/
>

[OMPI users] GCC 4.9 and MPI_F08?

2014-08-12  Daniels, Marcus G
Hi,

It looks like there is no check yet for "GCC$ ATTRIBUTES NO_ARG_CHECK" -- a prerequisite for activating mpi_f08. Could it be added?

https://bitbucket.org/jsquyres/mpi3-fortran/commits/243ffae9f63ffc8fcdfdc604796ef290963ea1c4

Marcus

Re: [OMPI users] MPI_Barrier hangs on second attempt but only when multiple hosts used.

2014-05-05  Daniels, Marcus G
From: Clay Kirkland [mailto:clay.kirkl...@versityinc.com]
Sent: Friday, May 02, 2014 03:24 PM
To: us...@open-mpi.org
Subject: [OMPI users] MPI_Barrier hangs on second attempt but only when multiple hosts used.

I have been using MPI for many many years so I have very well
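
The symptom described in the quoted report can be captured by a reproducer of roughly this shape (a minimal sketch under assumptions, not the original test program); the hang was reported at the second barrier when ranks span more than one host, e.g. mpirun -np 4 --host hostA,hostB ./barrier_twice:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) printf("first barrier passed\n");

    MPI_Barrier(MPI_COMM_WORLD);   /* the reported hang occurs here */
    if (rank == 0) printf("second barrier passed\n");

    MPI_Finalize();
    return 0;
}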

Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 an 1.7.2

2013-07-09  Daniels, Marcus G
The Intel MPI implementation does this. The performance between the accelerators and the host is poor, though: about 20 MB/sec in my ping/pong test. Intra-MIC communication is about 1 GB/sec, whereas intra-host is about 6 GB/sec. Latency is higher (i.e. worse) for the intra-MIC communication
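
For reference, the kind of ping/pong bandwidth measurement referred to above looks roughly like this (a sketch; the 4 MB message size and iteration count are illustrative, not the parameters actually used in the test):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const int n = 4 * 1024 * 1024;   /* 4 MB message, illustrative */
    const int iters = 100;
    int rank;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(n);

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("round-trip bandwidth: %.1f MB/s\n",
               2.0 * n * iters / (t1 - t0) / 1.0e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Run with one rank on the host and one on the accelerator (or both on the MIC) to reproduce the three cases compared above.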

Re: [OMPI users] Programming with Big Data in R

2013-02-26  Daniels, Marcus G
Also, with regard to your subject line, there is a wide variety of options for connecting to data: everything from `redis' (http://redis.io/, http://cran.r-project.org/web/packages/rredis/index.html) to HDF5 (http://cran.r-project.org/web/packages/hdf5/index.html), to memory mapped files

Re: [OMPI users] Programming with Big Data in R

2013-02-26  Daniels, Marcus G
On Feb 26, 2013, at 12:17 PM, Ralph Castain wrote:
> I have someone who is interested in knowing if anyone is currently working
> with pbdR:

It looks to me like an evolution of the capabilities in the `snow' wrapper of `Rmpi', but the addition of the BLACS/PBLAS/ScaLAPACK interfaces data

Re: [OMPI users] openib_reg_mr

2012-06-15  Daniels, Marcus G
On Jun 15, 2012, at 8:02 AM, Jeff Squyres wrote:
> Were there any clues in /var/log/messages or dmesg?

Thanks. I found a suggestion from Nathan Hjelm to add "options mlx4_core log_mtts_per_seg=X" (where X is 5 in my case). Offline suggestions (which also included that) were also add

[OMPI users] openib_reg_mr

2012-06-09  Daniels, Marcus G
Hi,

Is there anything I can do about this? I don't have any locked memory limits.

Thanks,
Marcus

Creating ensight file: EnSight6.geo01  elapsed secs= 6.84
--
The OpenFabrics (openib) BTL failed to register memory
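
One quick way to confirm the "no locked memory limits" claim from inside the running process is to query RLIMIT_MEMLOCK directly (a small standalone sketch, equivalent to checking `ulimit -l`; not an Open MPI diagnostic):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_MEMLOCK is the per-process limit on locked (registered) memory */
    if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("RLIMIT_MEMLOCK: unlimited\n");
        else
            printf("RLIMIT_MEMLOCK: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
    }
    return 0;
}

Running this under mpirun shows the limit the MPI processes actually inherit, which can differ from the interactive shell's limit.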