Hi Francisco,
The workflow engine is based, at its very core, on two different primitives:
events and activities.

An event is something that happens and initiates an execution:
catch/throw events of any type, timers, conditions, and so on.
An activity is something that performs a unit of computation, like a
script, a user task, etc.
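In BPMN terms the contrast looks roughly like this (an abbreviated sketch,
namespaces omitted, all ids and names illustrative):

    <!-- An event: something that happens and initiates an execution -->
    <startEvent id="hourlyTimer">
      <timerEventDefinition>
        <timeCycle>R/PT1H</timeCycle>  <!-- fires every hour -->
      </timerEventDefinition>
    </startEvent>

    <!-- An activity: something that performs a unit of computation -->
    <scriptTask id="computeTotals" name="Compute totals">
      <script>System.out.println("computing...");</script>
    </scriptTask>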

Anything that starts a process should be modeled as a (start) event, and
a human task is not an event of any kind.

There is a primitive, the start event with parallelMultiple set to true,
that covers multiple event definitions that must all happen in order to
start a process. We don't have this implemented in the engine right now
(it is really hard to make it work and requires careful design), but it
would be exactly your use case if we had it: one event for workflow A
finishing, and another event triggered by a human sending a start request
with their inputs.
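For reference, such a start event would look roughly like this in BPMN 2.0
XML (abbreviated; the message names are made up for illustration):

    <startEvent id="combinedStart" parallelMultiple="true">
      <!-- both event definitions must occur before the process starts -->
      <messageEventDefinition messageRef="workflowAFinished"/>
      <messageEventDefinition messageRef="manualStartRequest"/>
    </startEvent>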

That being said, about this idea:

This sort of idea would introduce human tasks as events into the very core
of the engine, and I don't see any justification for doing that.

As we mentioned at some point, human tasks will become an entire subsystem
of their own, as they do not fit the requirements to be a proper subsystem
with the features we had in v7.

This introduces a concept of process orchestration based on human input,
which defeats the purpose of a workflow, as you are introducing an
arbitrary way of executing subprocesses or interdependent processes based
on humans. Using the output of a human task to trigger the execution of a
subprocess is not the same as using human input as some sort of gateway
event.

How to achieve this:
As Alex mentioned, you can do it in a very simple way (a BPMN sketch
follows the list):
1. Workflow A finishes and sends a message to a Kafka topic.
2. A third-party system consumes the event and allows you to manipulate
the input.
3. It sends the result to another topic.
4. A process listening to that topic is triggered through its start event.
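A rough sketch of step 4, assuming a BPMN message start event whose
message maps to the Kafka topic (in Kogito that mapping is done through
the messaging add-on configuration; all names here are illustrative):

    <message id="manipulatedInput" name="manipulated-input"/>  <!-- the topic -->

    <process id="typeB">
      <startEvent id="onManipulatedInput">
        <messageEventDefinition messageRef="manipulatedInput"/>
      </startEvent>
      <!-- ... rest of workflow B ... -->
    </process>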

If you don't like the third-party system, you can create a very simple
process that reads from a stream, lets you modify the input, and sends
the outcome to another stream.
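Something along these lines (abbreviated BPMN again, data mappings
omitted, names illustrative):

    <process id="inputRelay">
      <!-- read the output of workflow A from one stream -->
      <startEvent id="fromA">
        <messageEventDefinition messageRef="workflowAOutput"/>
      </startEvent>
      <sequenceFlow id="f1" sourceRef="fromA" targetRef="adjustInput"/>
      <!-- let a human modify the input -->
      <userTask id="adjustInput" name="Adjust input"/>
      <sequenceFlow id="f2" sourceRef="adjustInput" targetRef="toB"/>
      <!-- send the outcome to the stream the B workflows listen on -->
      <endEvent id="toB">
        <messageEventDefinition messageRef="workflowBInput"/>
      </endEvent>
    </process>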

Cheers



On Mon, May 6, 2024 at 5:16 PM Francisco Javier Tirado Sarti <
[email protected]> wrote:

> Alex, I might be missing something, but I do not think this scenario can be
> covered through event consumption. The key part is that workflows of type B
> are manually executed by users, who will provide their own set of
> parameters. Workflow of type A just sets a variable context which is
> shared by all workflows of type B. To simulate such a context without
> introducing the concept into the workflow definition itself, the properties
> set up by A should be passed as input of B.
>
> On Mon, May 6, 2024 at 5:05 PM Alex Porcelli <[email protected]> wrote:
>
> > Isn’t this already achievable using events with different topics?
> >
> > -
> > Alex
> >
> >
> > On Mon, May 6, 2024 at 11:02 AM Francisco Javier Tirado Sarti <
> > [email protected]> wrote:
> >
> > > Hi,
> > > This is related to issue
> > > https://github.com/apache/incubator-kie-kogito-runtimes/issues/3495
> > > We have one user who would like to reuse the result of one workflow
> > > execution (let's call this workflow of type A) as input of several
> > > workflows (let's call them workflows of type B).
> > >
> > > Workflow A is executed before all B workflows. Then B workflows are
> > > manually executed by users. The desired input of B workflows should be a
> > > merge of what the user provides in the start request and the output of
> > > workflow A. In order to achieve this, users are expected to include, in
> > > the start request of a workflow of type B, the process instance id of
> > > workflow A (so rather than taking the output of A and merging it for
> > > every call, they just pass the process instance id).
> > >
> > > In order for this approach to work, the output of workflow A has to be
> > > stored somewhere in the DB (currently the runtimes DB only stores active
> > > process information). Since we do not want all processes to keep their
> > > output information in the DB (only workflows of type A), workflows of
> > > type A have to be identified somehow.
> > >
> > > But before entering into more implementation details, what I would like
> > > to know is whether this is a valid use case for BPMN or not. The
> > > implementation implications are pretty relevant. If it is a valid use
> > > case for both BPMN and SWF, we can implement this functionality in the
> > > Kogito core, where we can take advantage of the existing persistence
> > > add-ons and add the newly required storage there. If not, we need to
> > > provide an SWF-specific add-on, with the additional storage, for each
> > > existing persistence add-on.
> > > Please share your thoughts.
> > > Thanks in advance.
> > >
> >
>
