Hello,
I think there are a few things still missing in Open MPI's PMI2 support to
make it work with Slurm. We are the ones at Bull who integrated the PMI2
code from MPICH2 into Slurm. The attached patch should fix the issue
(launch with srun --mpi=pmi2). This still needs to be checked with other
PMI2 implementations.
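
For anyone who wants to reproduce this, a minimal test sequence might look
like the following (assuming Open MPI was configured with --with-pmi
pointing at the Slurm installation; the prefixes here are hypothetical):

    # build Open MPI against Slurm's PMI2 headers/library
    ./configure --with-pmi=/opt/slurm --prefix=/opt/openmpi
    make install

    # launch directly under srun, selecting Slurm's pmi2 plugin
    srun --mpi=pmi2 -n 4 ./ring_test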
On Jul 17, 2013, at 20:15, "Jeff Squyres (jsquyres)" wrote:
> On Jul 17, 2013, at 12:16 PM, Nathan Hjelm wrote:
>
>> As Ralph suggested, you need to pass the --level or -l option to see all the
>> variables. --level 9 will print everything. If you think there are variables
>> everyday users
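
For reference, hedged examples of the invocations under discussion (option
spellings as of the 1.7-era ompi_info; verify against your build):

    # show every MCA variable, including developer-level ones
    ompi_info --all --level 9

    # restrict the dump to a single framework/component
    ompi_info --param btl sm --level 9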
On Jul 18, 2013, at 5:46 AM, George Bosilca wrote:
> On Jul 17, 2013, at 20:15, "Jeff Squyres (jsquyres)" wrote:
>
>> On Jul 17, 2013, at 12:16 PM, Nathan Hjelm wrote:
>>
>>> As Ralph suggested, you need to pass the --level or -l option to see all the
>>> variables. --level 9 will print
Thanks Piotr - I'll apply that and move it to the 1.7 branch.
Some of us are trying to test the pmi2 support in 2.6.0 and hitting a problem.
We have verified that the pmi2 support was built/installed, and that both
slurmctld and slurmd are at the 2.6.0 level. When we run "srun --mpi=list", we get:
On Jul 18, 2013, at 8:06 AM, Ralph Castain wrote:
> That's a good point, and a bad behavior. IIRC, it results from the MPI
> Forum's adoption of the MPI-T requirement that stipulates we must allow
> access to all control and performance variables at startup so they can be
> externally seen/man
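
For concreteness, a minimal sketch of the MPI-T usage pattern behind that
requirement, assuming an MPI-3.0 mpi.h (error handling omitted): a tool can
open the tools interface and walk every control variable before MPI_Init()
is ever called, which is why everything must be registered at startup.

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int provided, ncvar;

        /* MPI-T may be initialized before MPI_Init(), so all control
         * variables must already be registered by this point. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        MPI_T_cvar_get_num(&ncvar);
        printf("%d control variables registered\n", ncvar);

        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, binding, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                                &enumtype, desc, &desc_len, &binding,
                                &scope);
            printf("  %s (verbosity %d)\n", name, verbosity);
        }

        MPI_T_finalize();
        return 0;
    }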
On Jul 18, 2013, at 15:06, Ralph Castain wrote:
I think ompi_info has always shown all the variables regardless of what you
have the selection variable set to (at least in some cases). We now just
display everything in all cases. An additional benefit of the updated code
is that i
On Jul 18, 2013, at 7:05 AM, David Goodell (dgoodell)
wrote:
> On Jul 18, 2013, at 8:06 AM, Ralph Castain wrote:
>
>> That's a good point, and a bad behavior. IIRC, it results from the MPI
>> Forum's adoption of the MPI-T requirement that stipulates we must allow
>> access to all control an
On Thu, Jul 18, 2013 at 07:53:35AM -0700, Ralph Castain wrote:
>
> On Jul 18, 2013, at 7:05 AM, David Goodell (dgoodell)
> wrote:
>
> > On Jul 18, 2013, at 8:06 AM, Ralph Castain wrote:
> >
> >> That's a good point, and a bad behavior. IIRC, it results from the MPI
> >> Forum's adoption of t
Hello,
Could someone who is more familiar with the architecture of the sm BTL
comment on the technical feasibility of the following: is it possible to
easily extend the BTL (i.e. without having to rewrite it completely from
scratch) so as to be able to perform transfers using both KNEM (or oth
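
(For context: the 1.7-era sm BTL already has a KNEM path for large
single-copy transfers; the question above is about combining mechanisms.
The parameter names below are from that series and should be verified with
ompi_info on the actual build:)

    # check whether the sm BTL was built with KNEM support
    ompi_info --param btl sm --level 9 | grep knem

    # ask the sm BTL to use KNEM for its copies
    mpirun --mca btl sm,self --mca btl_sm_use_knem 1 -n 2 ./app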
On Jul 18, 2013, at 9:53 AM, Ralph Castain wrote:
> On Jul 18, 2013, at 7:05 AM, David Goodell (dgoodell)
> wrote:
>
>> On Jul 18, 2013, at 8:06 AM, Ralph Castain wrote:
>>
>>> That's a good point, and a bad behavior. IIRC, it results from the MPI
>>> Forum's adoption of the MPI-T requireme
On Jul 18, 2013, at 8:17 AM, "David Goodell (dgoodell)"
wrote:
> On Jul 18, 2013, at 9:53 AM, Ralph Castain wrote:
>
>> On Jul 18, 2013, at 7:05 AM, David Goodell (dgoodell)
>> wrote:
>>
>>> On Jul 18, 2013, at 8:06 AM, Ralph Castain wrote:
>>>
>>>> That's a good point, and a bad behavior
On Jul 18, 2013, at 17:12, "Iliev, Hristo" wrote:
> Hello,
>
> Could someone who is more familiar with the architecture of the sm BTL
> comment on the technical feasibility of the following: is it possible to
> easily extend the BTL (i.e. without having to rewrite it completely from
> sc
On Thu, Jul 18, 2013 at 08:33:37AM -0700, Ralph Castain wrote:
>
> On Jul 18, 2013, at 8:17 AM, "David Goodell (dgoodell)"
> wrote:
>
> > On Jul 18, 2013, at 9:53 AM, Ralph Castain wrote:
> >
> >> On Jul 18, 2013, at 7:05 AM, David Goodell (dgoodell)
> >> wrote:
> >>
> >>> On Jul 18, 2013,
On Jul 18, 2013, at 17:07, Nathan Hjelm wrote:
> This was discussed in depth before the MCA rewrite came into the trunk. There
> are only two cases where we load and register all the available components:
> ompi_info, and MPI_T_init_thread(). The normal MPI case does not have this
> behavior
On Jul 18, 2013, at 11:12, "Iliev, Hristo" wrote:
> Hello,
>
> Could someone who is more familiar with the architecture of the sm BTL
> comment on the technical feasibility of the following: is it possible to
> easily extend the BTL (i.e. without having to rewrite it completely from
> s
On Thu, Jul 18, 2013 at 05:50:40PM +0200, George Bosilca wrote:
> On Jul 18, 2013, at 17:07, Nathan Hjelm wrote:
>
> > This was discussed in depth before the MCA rewrite came into the trunk.
> > There are only two cases where we load and register all the available
> > components: ompi_info, an
On Jul 18, 2013, at 11:50 AM, George Bosilca wrote:
> How is this part of the code validated? It might capitalize on some type of
> "trust". Unfortunately … I have no such notion.
Not sure what you're asking here.
> I would rather take the path of the "least astonishment", a __consistent__
>
What: Change the ompi_proc_t endpoint data lookup to be more flexible
Why: As collectives and one-sided components are using transports
directly, an old problem of endpoint tracking is resurfacing. We need a
fix that doesn't suck.
When: Assuming there are no major objections, I'll start writing
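
To make the proposal concrete, one hedged sketch of what a more flexible
lookup could mean; every name below is invented for illustration and is not
the actual Open MPI API:

    #include <stddef.h>

    #define PROC_MAX_ENDPOINTS 8    /* hypothetical compile-time cap */

    struct proc {
        /* ... other per-peer state ... */
        void *endpoint[PROC_MAX_ENDPOINTS]; /* one slot per registered user */
    };

    static int next_tag = 0;

    /* A transport user (BML, one-sided, collective component) reserves a
     * tag once at init time instead of owning a hard-wired proc_t field. */
    int proc_endpoint_tag_reserve(void)
    {
        return (next_tag < PROC_MAX_ENDPOINTS) ? next_tag++ : -1;
    }

    /* Lookup is then a constant-time array index that any component can
     * use without knowing about the others. */
    static inline void *proc_endpoint_get(struct proc *p, int tag)
    {
        return p->endpoint[tag];
    }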
+1, but I helped come up with the idea. :-)
On Jul 18, 2013, at 5:32 PM, "Barrett, Brian W" wrote:
> What: Change the ompi_proc_t endpoint data lookup to be more flexible
>
> Why: As collectives and one-sided components are using transports
> directly, an old problem of endpoint tracking is r
+1, though I do have a question.
We are looking at exascale requirements, and one of the big issues is memory
footprint. We currently retrieve the endpoint info for every process in the
job, plus all the procs in any communicator with which we do a connect/accept -
even though we probably will
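
One way to read this concern: endpoint resolution could be made lazy, paying
the memory cost only for peers a process actually talks to. A rough sketch,
with every name invented (modex_recv_endpoint stands in for whatever runtime
key-value fetch is used):

    #include <stddef.h>

    struct peer { void *endpoint[8]; };  /* slots as in the sketch above */

    /* Stub standing in for the runtime's "modex" key-value fetch. */
    static void *modex_recv_endpoint(struct peer *p, int tag)
    {
        (void)p; (void)tag;
        return (void *)0x1;              /* placeholder address */
    }

    /* Resolve on first use instead of for every proc at MPI_Init time. */
    void *endpoint_lazy_get(struct peer *p, int tag)
    {
        if (NULL == p->endpoint[tag]) {
            p->endpoint[tag] = modex_recv_endpoint(p, tag);
        }
        return p->endpoint[tag];
    }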
On 7/18/13 7:39 PM, "Ralph Castain" <r...@open-mpi.org> wrote:
> We are looking at exascale requirements, and one of the big issues is memory
> footprint. We currently retrieve the endpoint info for every process in the
> job, plus all the procs in any communicator with which we do a connect/a