Jan,
I don't know of a way to do what you're suggesting
(i.e., prologue - run MPI job - epilogue) using
Globus alone. There may be ways to do this with
higher-level "workflow" tools built atop Globus.
One simple solution that might work is to write a
script that runs the prologue, calls "mpirun" to
run the MPI job, and then runs the epilogue.
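For example, something along these lines might do - the directory
names, the symlink, and the mpirun arguments below are placeholders,
so adjust them for your application and site:

```shell
#!/bin/sh
# Sketch of a single wrapper job: prologue, MPI run, epilogue.
# All names here (WORKDIR, the symlink, the mpirun line) are illustrative.

# --- prologue: create the working directory and symlinks ---
WORKDIR="${WORKDIR:-$PWD/mpi_run_$$}"
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1
# ln -s "$HOME/config/app.conf" app.conf   # links to config/master data

# --- run the MPI job; the command is passed as arguments, e.g. ---
#     ./run_job.sh mpirun -np 8 ./my_app
"$@"
status=$?

# --- epilogue: postprocessing/cleanup ---
# cp results.dat "$HOME/results/"          # copy results out, then tidy up
cd ..
# exit "$status" here if Globus should see the MPI job's exit code
```

Since this is submitted as one Globus job, the scheduler only ever sees
the single script, so the setup and cleanup can never run without the
MPI step in between.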
Sorry that I couldn't be of more help.
Nick
On Oct 23, 2007, at 10:01 AM, Jan Ploski wrote:
Hello,
Before I can start my MPI executable, I need to perform some simple
setup, like: create a working directory for it with a few necessary
symlinks to configuration and master data files. Likewise, after the
MPI executable is done running, I would like to perform some simple
postprocessing/cleanup.
I guess this sort of prologue/epilogue is not so unusual for MPI jobs,
but I have found no way to get it done using GT 4.0.5 without
submitting several jobs and doing synchronization between jobs on the
submitter side (I can't submit the MPI job before the setup job has
created its working dir and environment, obviously). Logically, and
from a user's viewpoint, the setup and cleanup steps belong to the MPI
job - I would never want to run any of the pre- and post-steps without
the MPI job, nor would I like anyone (who uses my job submission
scripts) to be aware that there is more than one job involved.
I looked at multijobs briefly, but I see no way to keep the MPI subjob
from getting started until the setup job is done, if they were subjobs
of a multijob. I also tried submitting just the setup job and then
forking mpirun from it myself. Unfortunately, Globus is so "kind" as
to wipe out the $PBS_NODEFILE variable from the environment, so I
would not know what -machinefile option to pass to mpirun (not to
mention that it would be an ugly hack anyway).
Maybe you have an idea how to implement a setup/run MPI/cleanup trio
elegantly?
Regards,
Jan Ploski