Hi Ralph,
Sorry for the late reply, something along the lines of "swamped" ;-)
> On 03 Sep 2015, at 16:04 , Ralph Castain wrote:
> The purpose of orte_max_vm_size is to subdivide the allocation - i.e., for a
> given mpirun execution, you can specify to only use a certain number of the
> allocated nodes.
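For illustration only (the node count, process count, and application name are
made up), the behaviour described above would look something like

    mpirun --mca orte_max_vm_size 2 -np 16 ./my_app

i.e., this particular mpirun confines its virtual machine to 2 of the allocated
nodes, leaving the rest of the allocation for other mpirun invocations.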
Alexey,
There is a conceptual difference between GET and WAIT: one can return NULL
while the other cannot. If you want a solution with do {} while, I think
the best place is specifically in the PML OB1 recv functions (around the
OMPI_FREE_LIST_GET_MT) and not inside the OMPI_FREE_LIST_GET_MT macro.
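A minimal sketch of that placement, assuming an ob1-style receive path; the
variable and list names are illustrative, not the actual source:

    ompi_free_list_item_t *item;

    /* Retry the non-blocking GET at the call site rather than changing the
     * macro itself: GET_MT may legitimately hand back NULL, so loop until
     * another thread returns an item to the list. */
    do {
        OMPI_FREE_LIST_GET_MT(free_list, item);
    } while (NULL == item);

The macro keeps its "may return NULL" contract; only the caller decides to
retry.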
Ralph is the guy who needs to answer this for you -- he's on travel at the
moment; his response may be a little delayed...
> On Sep 16, 2015, at 4:17 AM, Kay Khandan (Hamed) wrote:
>
> Hello everyone,
>
> My name is Kay. I’m a huge "oom-pi" fan, but only recently have been looking
> at it from
Hi,
Are there any technical reports or papers that summarize the collective
algorithms used in Open MPI, such as MPI_Barrier, MPI_Bcast, and MPI_Alltoall?
Dahai
> On 17 Sep 2015, at 20:48 , Ralph Castain wrote:
> Might not - there has been a very large amount of change over the last few
> months, and I confess I haven't been checking the DVM regularly. So let me
> take a step back and look at that code.
Ok.
> I'll also include the extensions you requested
Might not - there has been a very large amount of change over the last few
months, and I confess I haven't been checking the DVM regularly. So let me
take a step back and look at that code.
I'll also include the extensions you requested on the other email - I
didn't forget them, just somewhat over
> On 17 Sep 2015, at 20:34 , Ralph Castain wrote:
>
> Ouch - this is on current master HEAD?
Yep!
> I'm on travel right now, but I'll be back Fri evening and can look at it this
> weekend. Probably something silly that needs to be fixed.
Thanks!
Obviously I didn't check every single version
Ouch - this is on current master HEAD? I'm on travel right now, but I'll be
back Fri evening and can look at it this weekend. Probably something silly
that needs to be fixed.
On Thu, Sep 17, 2015 at 11:30 AM, Mark Santcroos wrote:
> Hi (Ralph),
>
> Over the last months I have been focussing on
Hi (Ralph),
Over the last months I have been focussing on exec throughput, and not so much
on the application payload (read: mainly using /bin/sleep ;-)
As things are stabilising now, I have returned my attention to "real"
applications, only to discover that launching MPI applications (built with the same
On Sep 16, 2015, at 12:02 PM, George Bosilca wrote:
>
> ./opal/mca/btl/usnic/btl_usnic_compat.h:161:OMPI_FREE_LIST_GET_MT(list,
> (item))
FWIW: This one exists because we use the same usnic BTL code between master and
v1.8/v1.10. We have some configury that figures out in which tree the usnic
BTL is being built.
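A hedged illustration of that arrangement; the guard and macro names below are
invented for the example and do not match the real btl_usnic_compat.h:

    /* One shared usnic BTL source tree; a configure-detected flag selects
     * which free-list API the code compiles against. */
    #if USNIC_BTL_IN_MASTER_TREE                  /* hypothetical flag */
    #  define USNIC_COMPAT_FREE_LIST_GET(list, item) \
              (item) = opal_free_list_get(list)   /* assumed master-side API */
    #else                                         /* v1.8 / v1.10 trees */
    #  define USNIC_COMPAT_FREE_LIST_GET(list, item) \
              OMPI_FREE_LIST_GET_MT(list, item)
    #endif

The shared code then calls the compat macro everywhere, and the grep hit above
is simply the v1.8/v1.10 side of such a mapping.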
No, it was not. Will fix.
-Nathan
On Wed, Sep 16, 2015 at 07:26:58PM -0700, Ralph Castain wrote:
>Yes - Nathan made some changes related to the add_procs code. I doubt that
>configure option was checked...
> On Wed, Sep 16, 2015 at 7:13 PM, Jeff Squyres (jsquyres) wrote:
>
>
George,
Thank you for your response.
In my opinion, our solution with a do/while() loop in OMPI_FREE_LIST_GET_MT
is better for our MPI+OpenMP hybrid application than using
OMPI_FREE_LIST_WAIT_MT, because with OMPI_FREE_LIST_WAIT_MT, MPI_Irecv()
will be suspended in opal_progress() until one of MPI_
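Roughly the contrast being described, conceptual only and not the literal macro
bodies:

    /* Non-blocking: item may come back NULL and the caller handles it. */
    OMPI_FREE_LIST_GET_MT(list, item);

    /* WAIT_MT, conceptually: keep driving progress until an item appears,
     * which is where MPI_Irecv() ends up stalled in the hybrid case above. */
    do {
        OMPI_FREE_LIST_GET_MT(list, item);
        if (NULL == item) {
            opal_progress();
        }
    } while (NULL == item);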