> 
> On Apr 27, 2016, at 6:04 PM, Gourav Rattihalli <[email protected]> 
> wrote:
> 
> Hi Devs,
> 
> I have developed the script that sends jobs to Aurora scheduler, which in 
> turn works with Mesos to launch jobs on the cluster. The script has the 
> details on resource usage for the job (memory, number of cores), the job 
> image to use, etc. I was wondering which module in Airavata does something 
> similar to submitting jobs to an HPC cluster via Torque/Slurm. Or, is there 
> a module that launches a hello-world executable on a node/cluster? I plan to 
> use it as a template to develop the module for Aurora/Mesos.

It will be GFac. You can write an Aurora implementation against the generic 
TASK interface - 
https://github.com/apache/airavata/blob/master/modules/gfac/gfac-core/src/main/java/org/apache/airavata/gfac/core/task/Task.java

See the local (fork) job submission task for reference - 
https://github.com/apache/airavata/blob/master/modules/gfac/gfac-impl/src/main/java/org/apache/airavata/gfac/impl/task/LocalJobSubmissionTask.java

The JobSubmissionTask interface extends Task with a cancel capability; based 
on what you want to accomplish, you can implement against either of these. 
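To illustrate the shape of such an implementation, here is a minimal, 
self-contained sketch. The `Task`/`JobSubmissionTask` interfaces and the 
`TaskStatus` type below are simplified stand-ins modeled loosely on the linked 
sources, not the actual Airavata signatures (the real interfaces take a 
`TaskContext` and more); `AuroraJobSubmissionTask` and its job-description 
parameter are hypothetical names for illustration only.

```java
// Hypothetical stand-ins mirroring the general shape of GFac's Task
// abstraction; the real Airavata interfaces differ in signatures and types.

enum TaskState { COMPLETED, FAILED, CANCELED }

class TaskStatus {
    private final TaskState state;
    TaskStatus(TaskState state) { this.state = state; }
    TaskState getState() { return state; }
}

// Stand-in for the generic Task interface.
interface Task {
    TaskStatus execute(String jobDescription);
    TaskStatus recover(String jobDescription);
}

// Stand-in for JobSubmissionTask, which adds a cancel capability.
interface JobSubmissionTask extends Task {
    TaskStatus cancel(String jobDescription);
}

// Skeleton of an Aurora-backed submission task: translate the job
// description (memory, cores, image) into an Aurora job config and
// hand it to the scheduler. The Aurora calls are stubbed here.
class AuroraJobSubmissionTask implements JobSubmissionTask {
    @Override
    public TaskStatus execute(String jobDescription) {
        // Here you would build the Aurora config and invoke the
        // Aurora client/scheduler API; stubbed for illustration.
        return new TaskStatus(TaskState.COMPLETED);
    }

    @Override
    public TaskStatus recover(String jobDescription) {
        // Naive recovery strategy for the sketch: resubmit.
        return execute(jobDescription);
    }

    @Override
    public TaskStatus cancel(String jobDescription) {
        // Would ask the Aurora scheduler to kill the job.
        return new TaskStatus(TaskState.CANCELED);
    }
}
```

The point of the sketch is only the structure: implement the generic interface 
if submit/recover is enough, or the job-submission variant if you also need 
cancellation for Aurora jobs.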

Suresh

> 
> -- 
> Regards,
> Gourav Rattihalli
