Hi,

 

I have long been puzzled by this: how exactly can we send a workflow to a
cluster (say, SGE) so that the heavy jobs inside that workflow are
distributed across different nodes of the cluster?

 

I noticed that a workflow can be executed outside Taverna using the
command-line script executeworkflow.sh (is that true for Taverna 2.x? I
cannot seem to find this shell script in Taverna 2.1).  If I run a workflow
this way from one of the nodes in the cluster, how can I distribute that
workflow across the other nodes?  I'm not sure I am making myself clear: in
my previous experience working on a cluster, I usually cut a big job into
small pieces, which the SGE queue distributes.  At the end, I use another
script to merge the results returned from the different nodes.  But I
cannot think of a way to apply that practice to a workflow.
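For what it's worth, the split/submit/merge practice I mean looks roughly
like the sketch below. It is a minimal illustration with made-up file names;
the actual SGE submission is only indicated in a comment (process_chunk.sh
is a hypothetical per-chunk script), since the real qsub invocation depends
on the site's queue setup:

```shell
#!/bin/sh
# Sketch of the usual split / distribute / merge pattern on a cluster.
set -e

workdir=$(mktemp -d)

# 1. Create a sample input and cut it into fixed-size pieces.
printf 'a\nb\nc\nd\n' > "$workdir/input.txt"
split -l 2 "$workdir/input.txt" "$workdir/chunk."

# 2. Process each piece.  On the cluster this loop would instead submit
#    each chunk to the SGE queue, e.g.
#        qsub -cwd -b y ./process_chunk.sh "$chunk"
#    Here we just transform each chunk locally to keep the sketch runnable.
for chunk in "$workdir"/chunk.*; do
    tr 'a-z' 'A-Z' < "$chunk" > "$chunk.out"
done

# 3. Merge the per-piece results back into one output.
cat "$workdir"/chunk.*.out > "$workdir/merged.txt"
cat "$workdir/merged.txt"
```

The question is whether there is an equivalent of this pattern for a whole
workflow, rather than for a single job I split by hand.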

 

Any input would be highly appreciated.

 

Fan   

 

------------------------------------------------------------------------------
_______________________________________________
taverna-users mailing list
[email protected]
Web site: http://www.taverna.org.uk
Mailing lists: http://www.taverna.org.uk/taverna-mailing-lists/
