I agree with the goal - we'll have to work this out at a later time. One key
will be maintaining a memory-efficient mapping of opal_identifier to an RTE
identifier, which typically requires some notion of launch grouping and rank
within that grouping.
On Jul 23, 2014, at 7:36 PM, George Bosilca wrote:
A BTL should be completely agnostic to the notions of vpid and jobid.
Unfortunately, as you mentioned, some of the BTLs are relying on this
information in diverse ways.
- If they rely on it for output purposes, this is a trivial matter, as a BTL is
supposed to relay any error upward and some upper la
Sounds reasonable. However, keep in mind that some BTLs actually require the
notion of a jobid and rank-within-that-job. If the current ones don't, I assure
you that at least one off-trunk one definitely does.
Some of the MTLs, of course, definitely rely on those fields.
On Jul 23, 2014, at 7:
Sweet; I'll have a look at all of that -- thanks.
On Jul 23, 2014, at 10:15 PM, George Bosilca wrote:
> I was struggling with a similar issue while trying to fix the OpenIB
> compilation. And I chose to implement a different approach, which does not
> require knowledge of what’s inside opal_p
I was struggling with a similar issue while trying to fix the OpenIB
compilation. And I chose to implement a different approach, which does not
require knowledge of what’s inside opal_process_name_t.
Look in opal/util/proc.h. You should be able to use: opal_process_name_vpid and
opal_process_n
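For illustration, a minimal sketch of that accessor style; the exact
declarations live in opal/util/proc.h, and the signature assumed below
(name in, 32-bit id out) is an assumption, not a confirmed API:

    #include "opal/util/proc.h"   /* declares the accessor named above */

    /* Read the vpid without knowing the layout of opal_process_name_t.
     * 'some_proc' is a hypothetical opal_proc_t pointer. */
    opal_process_name_t name = some_proc->proc_name;
    uint32_t vpid = opal_process_name_vpid(name);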
Ralph and I chatted in IM.
For the moment, I'm masking off the lower 32 bits to get the VPID, the
uppermost 16 as the job family, and the next 16 as the sub-family.
If George makes the name be a handle with accessors to get the parts, we can
switch to using that.
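For illustration, a minimal sketch of that masking, assuming the name is
carried as a single 64-bit integer (the field layout is just the convention
described above, not a stable API):

    #include <stdint.h>

    /* Assumed layout: bits 48-63 = job family, bits 32-47 = sub-family,
     * bits 0-31 = vpid. */
    static inline uint32_t name_vpid(uint64_t name) {
        return (uint32_t)(name & 0xffffffffULL);     /* lower 32 bits   */
    }
    static inline uint16_t name_job_family(uint64_t name) {
        return (uint16_t)(name >> 48);               /* uppermost 16    */
    }
    static inline uint16_t name_sub_family(uint64_t name) {
        return (uint16_t)((name >> 32) & 0xffffULL); /* next 16 bits    */
    }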
On Jul 23, 2014, at 9:57 PM,
You should be able to memcpy it to an ompi_process_name_t and then extract it
as usual.
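A sketch of that extraction; it assumes opal_process_name_t and
ompi_process_name_t have the same size and layout (which is what makes the
memcpy legal), and 'opal_name' is a hypothetical input variable:

    #include <string.h>

    /* 'opal_name' is the opal_process_name_t handed up by the BTL. */
    ompi_process_name_t ompi_name;
    memcpy(&ompi_name, &opal_name, sizeof(ompi_name));
    /* The usual fields are now readable, e.g. ompi_name.jobid and
     * ompi_name.vpid (the MPI_COMM_WORLD rank for the primary job). */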
On Jul 23, 2014, at 6:51 PM, Jeff Squyres (jsquyres) wrote:
> George --
>
> Is there a way to get the MPI_COMM_WORLD rank of an opal_process_name_t?
>
> I am currently outputting some information about pee
George --
Is there a way to get the MPI_COMM_WORLD rank of an opal_process_name_t?
I am currently outputting some information about peer processes in the usnic
BTL to include the peer's VPID, which is the MCW rank. I'll be sad if that
goes away...
On Jul 15, 2014, at 2:06 AM, George Bosilca wrote:
Usual place:
http://www.open-mpi.org/software/ompi/v1.8/
Please test and report problems by Wed, July 30.
Thanks
Ralph
My understanding is that both of these clauses are based on the fact that
there are ongoing communications between two processes when one of them
decides to shut down. From an MPI perspective, I can hardly see a case where
this is legit.
George.
On Wed, Jul 23, 2014 at 8:33 AM, Yossi Etigin wrote:
On Jul 23, 2014, at 11:02 AM, Ralph Castain wrote:
> Just a little confusing here as some of these folks have changed
> organizations, some of the orgs have dropped out of sight, etc. So it isn't
> entirely clear who owns what on your chart.
Note that the chart I included in the email is pulle
Done.
On Wed, Jul 23, 2014 at 11:02 AM, Ralph Castain wrote:
> Just a little confusing here as some of these folks have changed
> organizations, some of the orgs have dropped out of sight, etc. So it isn't
> entirely clear who owns what on your chart.
>
> Looking at your lists, it is full of pe
Just a little confusing here as some of these folks have changed organizations,
some of the orgs have dropped out of sight, etc. So it isn't entirely clear who
owns what on your chart.
Looking at your lists, it is full of people who haven't been involved with OMPI
for many years. So I'd say you
It is that time again -- it's time to clean house of SVN write access accounts.
SHORT VERSION
=============
Edit the wiki to preserve your organization's SVN accounts by COB, Thursday,
July 31, 2014:
https://svn.open-mpi.org/trac/ompi/wiki/2014-SVN-summer-cleaning
If you don't indicate whi
Are you sure something isn't stale? I.e., did you do a fresh checkout since
the last build, or a "git clean", or something?
On Jul 23, 2014, at 10:02 AM, Mike Dubman wrote:
> nope, we use git.
> it passed on rhel 6.x, failed on ubuntu/debian/fedora and rhel 7.x
>
>
> On Wed, Jul 23, 2014 at
nope, we use git.
it passed on rhel 6.x, failed on ubuntu/debian/fedora and rhel 7.x
On Wed, Jul 23, 2014 at 4:03 PM, Jeff Squyres (jsquyres) wrote:
> Mike --
>
> Are you having the same jenkins problem we ran into yesterday? If so,
> it's a simple fix:
>
> http://www.open-mpi.org/communit
Noticed by the VT guys.
On 07/23/2014 03:01 PM, Mike Dubman wrote:
CC libvt_mpi_la-vt_iowrap_helper.lo
CC libvt_mpi_la-vt_libwrap.lo
CC libvt_mpi_la-vt_mallocwrap.lo
CC libvt_mpi_la-vt_mpifile.lo
make[6]: Entering directory
'/var/tmp/OFED_topdir/BUILD/openmpi-
Mike --
Are you having the same jenkins problem we ran into yesterday? If so, it's a
simple fix:
http://www.open-mpi.org/community/lists/devel/2014/07/15211.php
On Jul 23, 2014, at 9:01 AM, Mike Dubman wrote:
>
> CC libvt_mpi_la-vt_iowrap_helper.lo
> CC libvt_mpi_la-vt_
CC libvt_mpi_la-vt_iowrap_helper.lo
CC libvt_mpi_la-vt_libwrap.lo
CC libvt_mpi_la-vt_mallocwrap.lo
CC libvt_mpi_la-vt_mpifile.lo
make[6]: Entering directory
'/var/tmp/OFED_topdir/BUILD/openmpi-1.8.2rc2/ompi/contrib/vt/vt/tools/vtunify/mpi'
ln -s
/var/tmp/OFED_topdir/
1. If the barrier is before del_procs, it does guarantee that all MPI calls
have been completed by all other ranks, but it does not guarantee that all ACKs
have been delivered. For MXM, closing the connection (del_procs call completed)
guarantees that my rank got all ACKs. So we need a barrier betwee
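A sketch of the ordering being described, with hypothetical helper names
standing in for the real finalize path (this is the runtime barrier inside
shutdown, not a user-level MPI_Barrier):

    /* Hypothetical names; sketch of the shutdown ordering above. */
    extern void rte_barrier(void);    /* runtime barrier across all ranks */
    extern void mxm_del_procs(void);  /* closes connections; completes only
                                         once this rank has all its ACKs  */

    void shutdown_sketch(void)
    {
        rte_barrier();      /* every rank has finished its MPI calls,
                               but ACKs may still be in flight         */
        mxm_del_procs();    /* per the MXM semantics above, returning
                               here means all ACKs have arrived        */
    }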