This is an auto-replied message. I am out of office on vacation until 7/18. For
DReAM related questions, please contact dream-support...@oracle.com or my
manager Lillian (lillian.kvi...@oracle.com).
Thanks.
[…]launch more than one MPI process / why would that not be desirable?
Bests,
Thomas
On May 4, 2011, at 15:51, Tony Lam wrote:
Hi,

I understand that a single orted is shared by all MPI processes from the
same communicator on each execution host. Does anyone see a problem with
each process having its own orted instead? My guess is that it would be
less efficient in terms of MPI communication and memory footprint.
[…]m API that tells OMPI what hostnames to use).

Does that help?
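For context, the most basic way to hand Open MPI a list of execution hosts from outside is a hostfile passed to mpirun. A minimal sketch (the node names and slot counts below are placeholders, not anything from this thread):

```shell
# Write a hostfile naming the hosts and how many MPI slots each offers.
# node01/node02 and the slot counts are hypothetical placeholders.
cat > myhosts <<'EOF'
node01 slots=4
node02 slots=4
EOF

# mpirun reads the hostfile, launches one orted per listed host, and
# spawns the MPI ranks under those daemons:
#   mpirun --hostfile myhosts -np 8 ./my_mpi_app

# Sanity check: two hosts listed.
grep -c slots myhosts
```

The same host list can also be given inline with `-H node01,node02`; the hostfile form is just easier to generate from an external scheduler.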
On Feb 22, 2011, at 7:10 PM, Tony Lam wrote:
Hi,

I'm looking into supporting OMPI jobs on our internal compute farms;
specifically, we'd like to schedule and launch the jobs under the
control of an internal resource manager that we developed. My reading so
far indicates this can be achieved with an orted/plm plug-in
(preferred ove[…]
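Besides writing a plm component, a lighter-weight integration that often comes up in this situation is to have the resource manager emit its allocation as a hostfile before invoking mpirun. A sketch, where `myrm_hosts` stands in for whatever command the internal resource manager actually provides (it is a fake here, purely for illustration):

```shell
# Hypothetical stand-in for the internal resource manager's
# "list my allocated nodes" command; here we just fake its output.
myrm_hosts() { printf 'node01\nnode02\nnode03\n'; }

# Convert the allocation into an Open MPI hostfile, one node per line.
myrm_hosts | awk '{ print $1 " slots=1" }' > rm_hostfile

# Launch within the allocation (commented out: requires Open MPI):
#   mpirun --hostfile rm_hostfile -np 3 ./my_mpi_app

# Show what was generated.
cat rm_hostfile
```

This avoids touching OMPI internals at the cost of looser integration: mpirun still launches orteds itself (e.g. via ssh) rather than through the resource manager's own remote-execution mechanism, which is what a real plm plug-in would provide.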