> The version of libtool that we're using in the OMPI
> 1.6 series still checks for C, C++, and Fortran, even if the project
> doesn't use C++ or Fortran (this has been fixed in later versions of
> libtool).
>
> Can you either uninstall your borked gfortran, install a proper/working ...
Do you want me to pull a 1.6.3 out of subversion and try
it?
Mark
>
>
> On Feb 11, 2013, at 10:03 PM, Mark Bolstad
> wrote:
>
> > I packed the compile info as requested but the message is too big.
> > Changing the compression didn't help. I can split it, or do you just ...
I packed the compile info as requested but the message is too big. Changing
the compression didn't help. I can split it, or do you just want to approve
it out of the hold queue?
Mark
On Mon, Feb 11, 2013 at 3:03 PM, Jeff Squyres (jsquyres) wrote:
> On Feb 11, 2013, at 2:46 PM, Mark Bolstad wrote:
... do the shared objects get created in the build cycle?
Mark
On Mon, Feb 11, 2013 at 1:35 PM, Jeff Squyres (jsquyres) wrote:
> Ah -- your plugins are all .a files.
>
> How did you configure/build Open MPI?
>
>
> On Feb 11, 2013, at 11:09 AM, Mark Bolstad
> wrote:
>
> > I ...
> I can't think of why that would happen offhand. I build
> and run all the time on ML with no problems. Can you delete that plugin
> and run ok?
>
> Sent from my phone. No type good.
>
> On Feb 10, 2013, at 10:22 PM, "Mark Bolstad"
> wrote:
>
> > I'm having some difficulties ...
I'm having some difficulties with building/running 1.6.3 on Mountain Lion
(10.8.2). I build with no errors and install into a prefix directory. I get
the following errors:
...
[bolstadm-lm3:38486] mca: base: component_find: unable to open
/Users/bolstadm/papillon/build/macosx-x86_64/Release/openmpi-1
> ... programs via the terminal, but if I try to run them in
> Xcode I get an error because it cannot find MPI.h; even if I do a
>
> #import
>
> I suspect that it should work but I am probably missing something. How
> are you able to use MPI on Xcode? Gotta change some build settings?
You may want to see if you have MacPorts installed. Typically (but not
always), /opt/local is from a MacPorts installation. If it is, then it's
very easy to remove mpich and install openmpi.
To check for MacPorts, see if /opt/local/bin/port exists. Then,
sudo port uninstall --follow-dependencies mpich
Some additional data:
Without threads it still hangs, with behavior similar to before.
All of the tests were run on a system running FC11 with X5550 processors.
I just reran on a node of a RHEL 5.3 cluster with E5530 processors (dual
Nehalem):
- openmpi 1.3.4 and gcc 4.1.2
- No issues: connectivity ...
Just a quick interjection, I also have a dual-quad Nehalem system, HT on,
24 GB RAM, hand-compiled 1.3.4 with options: --enable-mpi-threads
--enable-mpi-f77=no --with-openib=no
With v1.3.4 I see roughly the same behavior: hello and ring work; connectivity
fails randomly with np >= 8. Turning on -v inc ...
Thanks, that at least explains what is going on. Because I have an
unbalanced workload (at least for now), I assume that I'll need to poll. If
I replace the compositor loop with the following, it appears that I prevent
the serialization/starvation and service the servers equally. I can think of
edge cases ...
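The replacement loop itself is cut off in the archive. A minimal sketch of a polling compositor along those lines, assuming MPI_Iprobe is used to service whichever server has data ready; TAG_DATA, MAX_MSG, compositor_loop, and the zero-length "server is done" message are illustrative assumptions, not from the original post:

    #include <mpi.h>
    #include <stdlib.h>

    #define TAG_DATA 1             /* illustrative tag, not from the post */
    #define MAX_MSG  (1 << 20)     /* assumed per-message upper bound */

    /* Hypothetical polling compositor: rather than blocking on one rank
     * at a time (which serializes the servers), probe all sources and
     * receive from whichever rank has data ready. */
    static void compositor_loop(int num_servers)
    {
        char *buf = malloc(MAX_MSG);
        int remaining = num_servers;

        while (remaining > 0)
        {
            int flag, count;
            MPI_Status status;

            /* Non-blocking probe: is a message pending from anyone? */
            MPI_Iprobe(MPI_ANY_SOURCE, TAG_DATA, MPI_COMM_WORLD,
                       &flag, &status);
            if (!flag)
                continue;          /* nothing ready; poll again */

            MPI_Get_count(&status, MPI_BYTE, &count);
            MPI_Recv(buf, count, MPI_BYTE, status.MPI_SOURCE, TAG_DATA,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            if (count == 0)
                remaining--;       /* zero-length message = server finished */
            /* else: composite buf into the output frame here */
        }
        free(buf);
    }

Receiving on MPI_ANY_SOURCE in arrival order, rather than looping over the servers in a fixed rank order, is what keeps one fast sender from starving the others.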
Thanks, but that won't help. In the real application the messages are at
least 25,000 bytes long, mostly much larger.
Thanks,
Mark
On Fri, Jun 19, 2009 at 1:17 PM, Eugene Loh wrote:
> Mark Bolstad wrote:
>
> I have a small test code with which I've managed to duplicate the results ...
...ion loop */
        usleep( (unsigned long)(50 * drand48()) );   /* brief random pause */
    }

    /* Clean up: wait on any send still in flight, then free the slots */
    for (i = 0; i < BUFFERS; i++)
    {
        if ( sent[ i ] )
        {
            sent[ i ] = 0;
            MPI_Wait( request[ i ], &status );
        }
        free( request[ i ] );
        free( buffer[ i ] );
    }
I have a small test code with which I've managed to duplicate the results from a
larger code. In essence, using the sm btl with MPI_Isend, I wind up with the
communication being completely serialized, i.e., all the calls from process
1 complete, then all from 2, ...
This is version 1.3.2, vanilla compile. I ...
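The test code itself is truncated in the archive; a minimal sketch of the pattern being described, assuming each non-root rank streams MPI_Isend traffic to rank 0, might look like this (MSGS and all names are illustrative; the 25,000-byte size is the figure mentioned earlier in the thread):

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define MSGS 100
    #define LEN  25000    /* thread mentions messages of >= 25,000 bytes */

    int main(int argc, char **argv)
    {
        static char buf[LEN];
        int rank, size, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0)
        {
            /* With fair progress the MPI_SOURCE values interleave; with
             * the serialization described above they arrive grouped by
             * sender: all of rank 1, then all of rank 2, ... */
            MPI_Status status;
            for (i = 0; i < MSGS * (size - 1); i++)
            {
                MPI_Recv(buf, LEN, MPI_CHAR, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &status);
                printf("recv %d from rank %d\n", i, status.MPI_SOURCE);
            }
        }
        else
        {
            MPI_Request req[MSGS];
            memset(buf, rank, LEN);
            for (i = 0; i < MSGS; i++)
                MPI_Isend(buf, LEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                          &req[i]);
            MPI_Waitall(MSGS, req, MPI_STATUSES_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }

Running something like "mpirun -np 8 ./isend_order" and checking whether the printed source ranks interleave or arrive in contiguous blocks would show whether the sends are being progressed fairly.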