I have yet to finish the Airavata API thrift files related to the orchestrator, but I just committed partial files to -
https://svn.apache.org/repos/asf/airavata/trunk/modules/thrift-interfaces/
I will finish and ask for broader feedback on all the API methods, but related to this discussion, can we ...

I am of two minds about the "configure experiment" method. On the one hand, most of the gateways we are taking use cases from already have a local persistence mechanism for this, so we don't have a driver. And I'm sure there will be implementation subtleties. On the other hand, it would be a good feature ...
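For concreteness, here is a minimal Thrift IDL sketch of what a "configure experiment" method could look like. Everything below (the struct, its field list, and the method name) is an illustrative assumption, not the committed thrift-interfaces code:

    namespace java org.apache.airavata.api.sketch

    // Illustrative sketch only -- not the committed thrift-interfaces files.
    // Persists the configuration of an existing experiment without launching it.
    struct ExperimentConfiguration {
      1: required string experimentId,
      2: optional map<string, string> inputValues,
      3: optional string hostId,            // target compute host
      4: optional i32 nodeCount,
      5: optional i32 wallTimeMinutes
    }

    service ExperimentConfigurationService {
      void configureExperiment(1: ExperimentConfiguration config)
    }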
On Sun, Jan 19, 2014 at 4:03 PM, Lahiru Gunathilake wrote:

Hi Saminda,

I am writing this to clarify the CIPRES scenario; please correct me if I am wrong.

CIPRES users create experiments with all the parameters.

The easy case is that they simply give the input values and run jobs (because they store the job-related configuration in the application descriptor, and doesn't ...
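As a sketch of that "easy" path (hypothetical names; in this reading the job settings already live in the application descriptor, so launching needs only the input values):

    struct InputValue {
      1: required string name,
      2: required string value
    }

    service ExperimentLaunchService {
      // applicationId points at existing descriptors; nothing new is defined.
      // Returns the id of the launched job/experiment.
      string launchWithInputs(1: string applicationId,
                              2: list<InputValue> inputs)
    }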
On Jan 19, 2014, at 12:38 PM, Saminda Wijeratne wrote:

My initial idea is to have an experiment template saved; later, users would launch an experiment template as many times as they want, creating an experiment only at launch. If users want to make small changes, they could take the template, change it, and save it again, either to a new template ...
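A rough Thrift sketch of that template idea, with invented names (ExperimentTemplate, launchFromTemplate) purely for illustration:

    // Hypothetical sketch: a saved template is reusable, and an experiment
    // record is created only at launch time.
    struct ExperimentTemplate {
      1: required string templateId,
      2: required string applicationId,                 // existing descriptors
      3: optional map<string, string> defaultInputs,
      4: optional map<string, string> schedulingHints
    }

    service ExperimentTemplateService {
      // Returns the templateId of the saved (or re-saved) template.
      string saveTemplate(1: ExperimentTemplate tmpl),
      // Creates and launches a new experiment from a saved template,
      // optionally overriding a few inputs at launch time.
      string launchFromTemplate(1: string templateId,
                                2: map<string, string> inputOverrides)
    }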
I see Amila's point, and it can be argued that the Airavata Client can fetch an experiment, modify what is needed, and re-submit it as a new experiment. But I agree with Saminda: if an experiment has dozens of inputs and only, say, a parameter or the scheduling info needs to be changed, cloning makes it useful. Th...
... an experiment will not define new descriptors but rather point to existing descriptor(s). IMO (correct me if I'm wrong),

Experiment = Application + Input value(s) for the application + Configuration data for managing the job
Application = Service Descriptor + Host Descriptor + Application Descriptor

This seems like adding a new experiment definition (i.e., new descriptors). As far as I understood, this should be handled at the UI layer (?). For the backend it will just be new descriptor definitions (?).

Maybe I am missing something.

- AJ
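AJ's decomposition maps naturally onto reference-holding structs. A hypothetical Thrift rendering (field names invented; not Airavata's actual data model):

    // Experiment  = Application + input value(s) + job-management config
    // Application = Service Descriptor + Host Descriptor + Application Descriptor
    // The experiment holds references to existing descriptors; it defines none.
    struct ApplicationRef {
      1: required string serviceDescriptorId,
      2: required string hostDescriptorId,
      3: required string applicationDescriptorId
    }

    struct Experiment {
      1: required string experimentId,
      2: required ApplicationRef application,        // existing descriptors only
      3: required map<string, string> inputValues,
      4: optional map<string, string> jobConfig      // scheduling, queue, etc.
    }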
On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne wrote:

This was in accordance with the CIPRES use-case scenario, where users would want to rerun their tasks but with a subset of slightly different parameters/inputs. This is particularly useful for them because their tasks can include more than 20-30 parameters most of the time.

On Fri, Jan 17, 2014 at 6:4...
Hi Amila,

The use of the word "cloning" is misleading. Saminda suggested that we would need to run the application on a different host (based on the user's intuition about host availability/efficiency), keeping all the other variables constant (input changes are also allowed). As an example: i...
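Under that reading, "cloning" is really re-running with a host override. A hypothetical Thrift sketch of such an operation (the method and parameter names are invented):

    service ExperimentCloneService {
      // Reuses an existing experiment, overrides the target host (and
      // optionally some inputs), and keeps everything else constant.
      // Returns the id of the newly created experiment.
      string cloneExperiment(1: string sourceExperimentId,
                             2: string newHostId,
                             3: map<string, string> inputOverrides)
    }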
Thanks Sachith for this overview talk. Nice summary.

Suresh

On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana wrote:
> Hi All,
>
> This is the summary of the meeting we had Wednesday (01/16/14) on the
> Orchestrator.
>
> Orchestrator Overview
> I introduced the Orchestrator and I have attached the presentation
> herewith.
>
> Adding Job Cloning capability ...