Hi David,

Please check this workflow out: https://code.kepler-project.org/code/kepler/trunk/workflows/SC06-Tutorial/JobSubmission.xml.

It submits one job to a cluster and keeps checking its status until it finishes. You can edit the workflow to submit multiple jobs and check their statuses until all of them finish. We explained this approach in one of our papers: http://users.sdsc.edu/~jianwu/JianwuWang_files/Theoretical%20enzyme%20design%20using%20the%20Kepler%20scientific%20workflows%20on%20the%20Grid%20(ICCS2010).pdf. Figure 4 in the paper shows the workflow logic. I hope it is helpful to you.
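In case it helps to see the control logic outside of Kepler: below is a minimal Python sketch of the same submit-then-poll pattern, i.e. keep checking every job's status until all of them reach a terminal state. This is illustrative only, not Kepler code; the function name `wait_for_jobs`, the status strings "done"/"error", and the stub status function are all assumptions standing in for a real `qstat` query on the cluster.

```python
import time

def wait_for_jobs(check_status, job_ids, poll_interval=60.0):
    """Poll check_status(job) for every job until all report a terminal
    status ("done" or "error" here, by assumption); return final statuses."""
    pending = set(job_ids)
    final = {}
    while pending:
        for job in list(pending):
            status = check_status(job)
            if status in ("done", "error"):  # terminal states (assumed names)
                final[job] = status
                pending.discard(job)
        if pending:
            time.sleep(poll_interval)
    return final

# Usage with a stub status function standing in for a real qstat query:
replies = {"job1": ["running", "done"], "job2": ["error"]}
def fake_status(job):
    return replies[job].pop(0)

print(wait_for_jobs(fake_status, ["job1", "job2"], poll_interval=0))
```

In the workflow, the role of `check_status` is played by the Job Checker actor, and the loop terminates only when every submitted job has either finished or errored out, which is exactly the trigger condition for the downstream module.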

Best wishes,

Jianwu Wang, Ph.D.
[email protected]
http://users.sdsc.edu/~jianwu/

Scientific Workflow Automation Technologies (SWAT) Laboratory
San Diego Supercomputer Center
University of California, San Diego
San Diego, CA, U.S.A.


On 10/28/11 11:15 AM, David LeBauer wrote:
Hello,

I want to launch an ensemble of jobs on a remote machine. Can I
configure the JobSubmitter / Job Checker to wait until all of the jobs
reach either an error or leave the queue?

Essentially, I want to implement something like the following in kepler:
executable: server:/path/to/rundir/run
input files: configfiles{1..500}


rsync configfiles* server:/path/to/rundir/
ssh -T <server> "cd /path/to/rundir/; for f in configfiles*; do qsub
-cwd -N $f -pe mpich 1 -j y -o $f.log ./run 1 $f; done"

and then, when all of the jobs have status 1 or 0, trigger the next
module (which will read the output)

Thank you,

David

_______________________________________________
Kepler-users mailing list
[email protected]
http://lists.nceas.ucsb.edu/kepler/mailman/listinfo/kepler-users