Re: [OMPI users] torque integration when tm ras/plm isn't compiled in.
OMPI_MCA_orte_leave_session_attached=1

Note: this does set limits on scale, though, if the system uses an ssh
launcher. There are system limits on the number of open ssh sessions you can
have at any one time. For all other launchers, no limit issues exist that I
know about.

HTH
Ralph

On Oct 22, 2009, at 5:18 PM, Roy Dragseth wrote:

> Is it also possible to disable the backgrounding of the orted daemons?
> When they fork into the background, one loses the feedback about cpu usage
> in the job. Not really a big issue, though...

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
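Taken together, the two MCA variables discussed in this thread let mpirun run
without any `--hostfile` argument. A hypothetical Torque job-script fragment
(csh syntax, as used in the thread; the resource request and application name
are illustrative assumptions, not from the original posts):

```shell
#!/bin/csh
#PBS -l nodes=2:ppn=8
# Illustrative sketch only: combines the thread's suggestions so that
# mpirun needs no --hostfile. The MCA variable names come from the
# thread; everything else here is assumed.
setenv OMPI_MCA_orte_default_hostfile $PBS_NODEFILE
setenv OMPI_MCA_plm_rsh_agent pbsdshwrapper.py
setenv OMPI_MCA_orte_leave_session_attached 1
mpirun ./my_mpi_app
```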
Re: [OMPI users] torque integration when tm ras/plm isn't compiled in.
On Friday 23 October 2009 00:50:00 Ralph Castain wrote:
> Why not just
>
> setenv OMPI_MCA_orte_default_hostfile $PBS_NODEFILE
>
> assuming you are using 1.3.x, of course.
>
> If not, then you can use the equivalent for 1.2 - ompi_info would tell
> you the name of it.

THANKS! Just what I was looking for. I had been looking up and down for it,
but couldn't find the right swear words.

Is it also possible to disable the backgrounding of the orted daemons? When
they fork into the background, one loses the feedback about cpu usage in the
job. Not really a big issue, though...

Regards,
r.

--
The Computer Center, University of Tromsø, N-9037 TROMSØ Norway.
phone:+47 77 64 41 07, fax:+47 77 64 41 00
Roy Dragseth, Team Leader, High Performance Computing
Direct call: +47 77 64 62 56. email: roy.drags...@uit.no
Re: [OMPI users] torque integration when tm ras/plm isn't compiled in.
Why not just

    setenv OMPI_MCA_orte_default_hostfile $PBS_NODEFILE

assuming you are using 1.3.x, of course.

If not, then you can use the equivalent for 1.2 - ompi_info would tell you
the name of it.

On Oct 22, 2009, at 4:29 PM, Roy Dragseth wrote:

> Hi all.
>
> I'm trying to create a tight integration between torque and openmpi for
> cases where the tm ras and plm aren't compiled into openmpi. This scenario
> is common for linux distros that ship openmpi. Of course, the ideal
> solution is to recompile openmpi with torque support, but this isn't
> always feasible, since I do not want to support my own version of openmpi
> in the stuff I'm distributing to others. We also see some proprietary
> applications shipping their own embedded openmpi libraries where the tm
> plm/ras is either missing or non-functional with the torque installation
> on our system.
>
> So far I have created a pbsdshwrapper.py that mimics ssh behaviour very
> closely, so that starting the orteds on all the hosts works as expected
> and the application starts correctly when I use
>
>     setenv OMPI_MCA_plm_rsh_agent "pbsdshwrapper.py"
>     mpirun --hostfile $PBS_NODEFILE
>
> What I want now is a way to get rid of the "--hostfile $PBS_NODEFILE" in
> the mpirun command. Is there an environment variable I can set so that
> mpirun grabs the right nodelist?
>
> By spelunking the code, I find that the rsh plm has support for SGE: it
> automatically picks up the PE_NODEFILE if it detects that it is launched
> within an SGE job. Would it be possible to have the same functionality for
> torque? The code looks a bit too complex at first sight for me to fix this
> myself.
>
> Best regards,
> Roy.
>
> --
> The Computer Center, University of Tromsø, N-9037 TROMSØ Norway.
> phone:+47 77 64 41 07, fax:+47 77 64 41 00
> Roy Dragseth, Team Leader, High Performance Computing
> Direct call: +47 77 64 62 56. email: roy.drags...@uit.no
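For readers curious what such a wrapper involves, here is a minimal,
hypothetical sketch. This is not Roy's actual pbsdshwrapper.py; the option
handling and the `pbsdsh -h` invocation are assumptions for illustration.
Open MPI's rsh launcher invokes its agent roughly as
`agent [ssh-style options] hostname command...`, so a wrapper needs to strip
any leading ssh-style options and hand the hostname and command to pbsdsh:

```python
#!/usr/bin/env python
"""Hypothetical sketch of an ssh-mimicking pbsdsh wrapper.

Not the pbsdshwrapper.py from this thread; the option set and the
pbsdsh invocation below are assumptions, for illustration only.
"""
import os
import sys

# ssh options that take a value: skip both the flag and its argument
_TAKES_VALUE = {"-o", "-l", "-p", "-i", "-F"}

def split_ssh_argv(argv):
    """Split ssh-style argv into (hostname, remote_command_list)."""
    i = 0
    while i < len(argv) and argv[i].startswith("-"):
        i += 2 if argv[i] in _TAKES_VALUE else 1
    return argv[i], argv[i + 1:]

if __name__ == "__main__":
    host, command = split_ssh_argv(sys.argv[1:])
    # pbsdsh -h <host> runs the command on that node via Torque's TM API,
    # so the remote orteds stay inside the job and its resource limits.
    os.execvp("pbsdsh", ["pbsdsh", "-h", host] + command)
```

Because the launch goes through pbsdsh rather than ssh, this approach also
sidesteps the open-ssh-session limits mentioned earlier in the thread.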