Alex,
This is indeed quite strange. You're receiving an error about truncated data
during a barrier, but MPI_Barrier is the only MPI function whose meaning is
purely synchronization: it moves no data around, so I can hardly see how it
could generate a truncation error.
You should put a breakpoint i
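As an aside on the point above, the "synchronization only, no data" nature of a barrier can be illustrated outside MPI. This hypothetical Python sketch (not MPI, just `threading.Barrier` from the standard library) shows that a barrier only coordinates arrival; each participant's data stays local:

```python
import threading

results = []
barrier = threading.Barrier(3)  # all 3 workers must arrive before any proceeds

def worker(rank):
    local = rank * rank     # each worker computes its own value
    barrier.wait()          # synchronization only: no payload crosses the barrier
    results.append((rank, local))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # each rank kept its own data: [(0, 0), (1, 1), (2, 4)]
```

Since the barrier carries no payload, a "truncated data" error during MPI_Barrier is surprising, which is the point being made above.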
Oh, how interesting; I hope this helps someone. Following another link, I
had to use:
./configure --prefix /usr --enable-shared --enable-static
when compiling this for Rmpi. Just curious, why isn't --enable-static a
default option?
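For anyone hitting the same issue, a sketch of the full build sequence around the configure line quoted above (only the configure line comes from the message; the make/install/ldconfig steps are assumed from a standard autotools build):

```shell
# assumed standard autotools build; --prefix /usr installs into
# the system default directories so Rmpi can find the libraries
./configure --prefix /usr --enable-shared --enable-static
make
sudo make install
sudo ldconfig   # refresh the linker cache after installing into /usr
```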
~Ben
On Thu, Apr 5, 2012 at 7:59 PM, Benedict Holland <
bened
Hi,
First, I'm glad to say my MOSIX component is working and giving good
initial results. Thanks for all your help!
I'm not sure how (I know I should fill in some license agreement docs),
but I would like to contribute the code to the Open-MPI project.
Is there an official code-review process? a
So I am now back on this full time, as I need this to work. Open MPI 1.4.3 is
deadlocking with Rmpi, and I need the latest code. I still get the exact
same problem. I configured it with --prefix=/usr to get it to install
everything in the default directories and added /usr/lib/openmpi to my
ldconfig. I
On Apr 5, 2012, at 1:26 PM, Josh Hursey wrote:
> All of the ones I am adding :) [most of the codes I am working with at
> the moment use the fortran interfaces]
You're killing me. :-)
> I think the 'example' extension is the only one that has f77
> interfaces at the moment. My off-trunk branche
All of the ones I am adding :) [most of the codes I am working with at
the moment use the fortran interfaces]
I think the 'example' extension is the only one that has f77
interfaces at the moment. My off-trunk branches have f77/f90
interfaces to their mpiext interfaces.
I'm willing to help test/d
On Apr 5, 2012, at 12:30 PM, Josh Hursey wrote:
> What is the state of 'mpiext' with this patch? From glancing at the
> branch it doesn't look like it has been touched.
Yeah, I've been thinking about this -- especially since you committed something
relevant to mpiext the other day.
You're right
Jeff,
What is the state of 'mpiext' with this patch? From glancing at the
branch it doesn't look like it has been touched.
-- Josh
On Thu, Apr 5, 2012 at 11:37 AM, Jeffrey Squyres wrote:
> WHAT: Revamp the entire MPI Fortran bindings; new "mpifort" wrapper compiler
>
> WHY: Much better mpi modu
WHAT: Revamp the entire MPI Fortran bindings; new "mpifort" wrapper compiler
WHY: Much better mpi module implementation; addition of MPI-3 mpi_f08 module
WHERE: Remove ompi/mpi/f77 and ompi/mpi/f90, replace with ompi/mpi/fortran
TIMEOUT: Teleconf, Tue Apr 17, 2012
==
My vote is for San Jose.
Sam
From: devel-boun...@open-mpi.org [devel-boun...@open-mpi.org] on behalf of Josh
Hursey [jjhur...@open-mpi.org]
Sent: Wednesday, April 04, 2012 5:14 AM
To: Open MPI Developers
Subject: Re: [OMPI devel] [EXTERNAL] Re: Developers
Ok, I'll leave it alone then. You may want to keep this in mind just in
case your merge with the trunk pollutes your bindings somehow.
--td
On 4/5/2012 8:45 AM, Jeffrey Squyres wrote:
I'm able to duplicate the problem, but I don't know if this is worth digging
into.
The entire Fortran bind
I'm able to duplicate the problem, but I don't know if this is worth digging
into.
The entire Fortran bindings will be replaced in about 2 weeks, and the problem
doesn't occur on my mpi3-fortran bitbucket branch.
On Apr 5, 2012, at 7:03 AM, TERRY DONTJE wrote:
> I noticed both IU and Oracle are se
I noticed both IU and Oracle are seeing failures on the trunk with Intel
test MPI_Keyval3_f. This was with r26237 and the last successful MTT
run of this test was r26232. I looked at the log and nothing popped out
at me. I'll try and narrow this down a little further but that won't be
until