Another approach that I've seen used is to insert a resource manager agent
in front of each Open MPI process (be it a runtime process or an
application process). Of course, it depends on how you collect your
resource usage and enforce your resource limitation policy.
In the case I'm referring to, the a
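For illustration only, such a per-process agent could be a small wrapper that the resource manager launches in place of each binary: it execs the real process and reports that one process's usage when it exits. This is just a sketch of the idea, not anything in Open MPI; the function name and output format are made up.

```python
# Hypothetical accounting wrapper: exec the real process as a child and
# collect its resource usage with wait4(), so each Open MPI process
# (orted or application) gets its own accounting record.
import os
import sys

def run_and_account(argv):
    """Fork/exec argv; return (exit_code, rusage) for that child only."""
    pid = os.fork()
    if pid == 0:
        os.execvp(argv[0], argv)  # child becomes the real process
    _, status, rusage = os.wait4(pid, 0)  # parent: reap and collect usage
    return os.waitstatus_to_exitcode(status), rusage

if __name__ == "__main__" and len(sys.argv) > 1:
    code, ru = run_and_account(sys.argv[1:])
    # ru_utime/ru_stime are CPU seconds; ru_maxrss is peak RSS (KiB on Linux)
    print(f"cpu={ru.ru_utime + ru.ru_stime:.3f}s maxrss={ru.ru_maxrss}",
          file=sys.stderr)
    sys.exit(code)
```

The wrapper only sees its own child, which is what makes per-process (and hence per-job) attribution straightforward compared to one shared daemon.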
In that case, why not just directly launch the processes without the orted? We
do it with slurm and even have the ability to do it with torque - so it could
be done.
See the orte/mca/ess/slurmd component for an example of how to do so.
On May 4, 2011, at 4:55 PM, Tony Lam wrote:
Hi Thomas,
We need to track job resource usage in our resource manager for
accounting and resource policy enforcement, and sharing a single orted
process across multiple jobs makes the tracking much more complicated.
We don't enforce other restrictions, and I'd appreciate any suggestion
on how to resolve this.
Hi,
Could you explain why you would like one orted on top of each MPI process?
There are some situations, like resource usage limitation / accounting,
that can be solved without changing the one-daemon-per-node deployment.
Or do you enforce other kinds of restrictions on the orted process?
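As one example of handling accounting without touching the deployment model: if the resource manager places all of a job's processes (the shared orted included) into a single Linux cgroup, usage can be read per job rather than per daemon. A minimal sketch, assuming the cgroup-v2 layout; the function name is made up, and the base directory is a parameter so it can be pointed at any cgroup tree.

```python
# Read a job's aggregate CPU usage from its cgroup-v2 directory; every
# process the job spawns is charged here, regardless of which orted
# launched it.
from pathlib import Path

def job_cpu_usage_usec(cgroup_dir):
    """Return total CPU time (microseconds) from the job cgroup's cpu.stat."""
    for line in Path(cgroup_dir, "cpu.stat").read_text().splitlines():
        key, _, value = line.partition(" ")
        if key == "usage_usec":
            return int(value)
    raise ValueError(f"usage_usec not found in {cgroup_dir}/cpu.stat")
```

Because the kernel does the aggregation, the one-daemon-per-node layout stays untouched; the tracking granularity is the cgroup, not the process tree.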
On May 4, 2011, at 1:51 PM, Tony Lam wrote:
Hi,
I understand a single orted is shared by all MPI processes from the same
communicator on each execution host. Does anyone see any problem that
MPI/OMPI may have with each process having its own orted? My guess is it
is less efficient in terms of MPI communication and memory footprint.