I get the following error while running "make install":
make[2]: Entering directory `/home_local/glebn/build_dbg/ompi/contrib/vt'
Making install in vt
make[3]: Entering directory `/home_local/glebn/build_dbg/ompi/contrib/vt/vt'
make[3]: *** No rule to make target `install'. Stop.
make[3]: Leaving directo
Greetings, MPI mavens,
Perhaps this belongs on users@, but since it's about development status
I thought I'd start here. I've fairly recently gotten involved in getting
an MPI environment configured for our institute. We have an existing
LSF cluster because most of our work is more High-Throug
This problem should be fixed now.
Thanks for the hint.
On Sat, 2008-02-09 at 08:47 -0500, Jeff Squyres wrote:
> While doing some pathscale compiler testing on the trunk (r17407), I
> ran into this compile problem (the first is a warning, the second is
> an error):
>
> pathCC -DHAVE_CONFIG_H -
I've been noticing another problem with the VT integration. If you do
a "./configure --enable-contrib-no-build=vt" a subsequent 'make
distclean' will fail in contrib/vt. The 'make distclean' will succeed
with VT enabled (default).
---
Making distclean in contrib/vt
make
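For reference, a minimal sketch of the sequence that triggers this, run from
the top of a fresh checkout:

  ./configure --enable-contrib-no-build=vt
  make
  make distclean    # fails under contrib/vt; succeeds when VT is built (the default)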
Hello!
I don't know if this is the right way to ask for help with developing
for Open MPI.
We are four French students and we have a project: we have to write a
new driver (a new BTL) between Open MPI and NewMadeleine (see the web
page, http://pm2.gforge.inria.fr/newmadeleine/doc/html/). With
* Josh Hursey wrote on Mon, Feb 11, 2008 at 07:31:25PM CET:
> I've been noticing another problem with the VT integration. If you do
> a "./configure --enable-contrib-no-build=vt" a subsequent 'make
> distclean' will fail in contrib/vt. The 'make distclean' will succeed
> with VT enabled (defa
Hello,
please apply this patch to make future contrib integration just a tad
bit easier. I verified that the generated configure script is
identical, minus whitespace and comments.
Cheers,
Ralf
2008-02-11 Ralf Wildenhues
* config/ompi_contrib.m4 (OMPI_CONTRIB): Unify listings of
Hi,
Since I upgraded to MacOS X 10.5.1, I've been having problems running
MPI programs (using both 1.2.4 and 1.2.5). The symptoms are
intermittent (i.e. sometimes the application runs fine), and appear as
follows:
1. One or more of the application processes die (I've seen both one and
tw
Cedric,
There is not much documentation about this subject. However, we have
some templates. Look in ompi/mca/btl/template to see how a new driver
is supposed to be written.
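If it helps, one way to bootstrap a new component from that template could
look roughly like the following sketch (the component name 'nmad' and the
bash-style renaming are only placeholders/assumptions, not something the
template mandates):

  cd ompi/mca/btl
  cp -r template nmad              # start the new BTL from the skeleton
  cd nmad
  # rename the files (assumes bash for the ${var/.../...} substitution)
  for f in btl_template*; do mv "$f" "${f/template/nmad}"; done
  # also rename the template_* symbols inside the sources and Makefile.am,
  # then re-run autogen.sh and configure from the top of the tree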
I have a question. As far as I understand, New Madelaine already
supports multiple devices, so I guess the matc
All:
The latest scrub of the 1.3 release schedule and contents is ready for
review and comment. Please use the following links:
1.3 milestones:
https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.3
1.3.1 milestones:
https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.3.1
In o
Out of curiosity, why is the one-sided RDMA component struck from 1.3? As
far as I'm aware, the code is in the trunk and ready for release.
Brian
On Mon, 11 Feb 2008, Brad Benton wrote:
All:
The latest scrub of the 1.3 release schedule and contents is ready for
review and comment. Please use
Yo Brian
The line through that item means it has already been completed and is ready
to go.
There should also be a line through item 1.3.a.vi - it has also been fixed.
On 2/11/08 8:29 PM, "Brian W. Barrett" wrote:
> Out of curiosity, why is the one-sided RDMA component struck from 1.3? As
> far
There is a known problem with Leopard and all versions of Open MPI. We
haven't had time to chase it down yet - probably still a few weeks away.
Ralph
On 2/11/08 1:39 PM, "Greg Watson" wrote:
> Hi,
>
> Since I upgraded to MacOS X 10.5.1, I've been having problems running
> MPI programs (using
Jeff and I chatted about this today, in fact. We know the LSF support is
borked, but neither of us has time right now to fix it. We plan to do so,
though, before the 1.3 release - just can't promise when.
Ralph
On 2/11/08 8:00 AM, "Eric Jones" wrote:
> Greetings, MPI mavens,
>
> Perhaps this
Hello all
Per last week's telecon, we planned the merge of the latest ORTE devel
branch to the OMPI trunk for after Sun had committed its C++ changes. That
happened over the weekend.
Therefore, based on the requests at the telecon, I will be merging the
current ORTE devel branch to the trunk on W