Hi Xue,

To give you some context, we are using Helix as the backend task execution
engine of the Apache Airavata project.
Recreating a workflow with the same configuration is fairly involved on our
side, because workflow generation goes through several parsers. As a
workaround, I came up with the following code to replicate an existing
workflow under a different name. Do you think this would be a good feature
to have in the original task API? If so, we can try to add it and send a PR.

public void cloneWorkflow(String workflowName) {
    // Look up the existing workflow's config and DAG
    WorkflowConfig workflowConfig = taskDriver.getWorkflowConfig(workflowName);
    JobDag jobDag = workflowConfig.getJobDag();

    Set<String> allNodes = jobDag.getAllNodes();
    Map<String, JobConfig> jobConfigMap = new HashMap<>();
    allNodes.forEach(job -> jobConfigMap.put(job, taskDriver.getJobConfig(job)));

    Workflow.Builder workflowBuilder =
            new Workflow.Builder(workflowName + "_CLONE").setExpiry(0);

    // Rebuild each job, copying its task configs under new task ids
    allNodes.forEach(job -> {
        List<TaskConfig> taskConfigs = new ArrayList<>();

        Map<String, TaskConfig> taskConfigMap = jobConfigMap.get(job).getTaskConfigMap();
        taskConfigMap.forEach((id, config) -> {
            TaskConfig.Builder taskBuilder = new TaskConfig.Builder()
                    .setTaskId(id + "_CLONE")
                    .setCommand(config.getCommand());
            config.getConfigMap().forEach(taskBuilder::addConfig);
            taskConfigs.add(taskBuilder.build());
        });

        JobConfig.Builder jobBuilder = new JobConfig.Builder()
                .addTaskConfigs(taskConfigs)
                .setFailureThreshold(jobConfigMap.get(job).getFailureThreshold())
                .setMaxAttemptsPerTask(jobConfigMap.get(job).getMaxAttemptsPerTask());

        workflowBuilder.addJob(job, jobBuilder);
    });

    // Preserve the original DAG's parent-child dependencies
    jobDag.getParentsToChildren().forEach((parent, children) ->
            children.forEach(child -> workflowBuilder.addParentChildDependency(parent, child)));

    WorkflowConfig.Builder workflowConfigBuilder =
            new WorkflowConfig.Builder().setFailureThreshold(0);
    workflowBuilder.setWorkflowConfig(workflowConfigBuilder.build());

    taskDriver.start(workflowBuilder.build());
}
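
For completeness, this is roughly how we invoke it (a minimal sketch; the
cluster name, instance name, and ZooKeeper address below are placeholders for
illustration, not our actual setup):

// Hypothetical setup: connection details below are placeholders
HelixManager manager = HelixManagerFactory.getZKHelixManager(
        "MyCluster", "workflow-cloner", InstanceType.ADMINISTRATOR, "localhost:2181");
manager.connect();
TaskDriver taskDriver = new TaskDriver(manager);

cloneWorkflow("MyWorkflow"); // submits and starts "MyWorkflow_CLONE"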

Thanks

Dimuthu


On Thu, Sep 13, 2018 at 2:57 PM Xue Junkai <junkai....@gmail.com> wrote:

> Hi Dimuthu,
>
> Currently, Helix does not support a rerun-workflow feature. If you would
> like to re-execute the workflow, please submit a new one.
>
> Or, for your scenario, was the workflow failure caused by failing jobs? If
> yes, you can increase the failure threshold at the job level and the task
> level, which keeps the tasks retrying without failing the workflow.
>
> Hope this answers your question.
>
> Best,
>
> Junkai
>
> On Wed, Sep 12, 2018 at 11:49 AM kishore g <g.kish...@gmail.com> wrote:
>
> > Lei, do you know if there is a way to restart the workflow?
> >
> > On Wed, Sep 12, 2018 at 10:07 AM DImuthu Upeksha <
> > dimuthu.upeks...@gmail.com> wrote:
> >
> > > Any update on this ?
> > >
> > > On Wed, Apr 4, 2018 at 9:10 AM DImuthu Upeksha <
> > > dimuthu.upeks...@gmail.com> wrote:
> > >
> > > > Hi Folks,
> > > >
> > > > I'm running 50-100 Helix Task Workflows at a time and, due to some
> > > > unexpected issues, some workflows go into the failed state. Is there a
> > > > way I can retry those workflows from the beginning, or clone new
> > > > workflows from them and run them as fresh workflows?
> > > >
> > > > Thanks
> > > > Dimuthu
> > > >
> > >
> >
>
>
> --
> Junkai Xue
>
