Hi Walter:

I am not sure I properly understood what you are assessing here.

The idea is to have:
1) a core component (say, a job service interface) that is
responsible for the logic of the job service (including storage), and
2) two different types of transport: one in-VM, deployed within the
runtime and just passing POJOs from the runtime to the job service, and
a different one as a REST client invoking the job service in a distributed tx.
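To make the split concrete, here is a minimal sketch in Java. All names here are hypothetical, invented for illustration; none of them come from the actual codebase:

```java
import java.time.Instant;

// Minimal job payload for the sketch (hypothetical type).
record JobDetails(String id, Instant fireTime) { }

// 1) Core component: the job service logic behind a single interface.
interface JobService {
    void create(JobDetails job);   // persist + schedule
    void cancel(String jobId);
}

// 2a) In-VM transport: deployed within the runtime; just passes the
// POJO straight to the colocated job service (same JVM, same tx).
class InVmJobServiceProxy implements JobService {
    private final JobService delegate;

    InVmJobServiceProxy(JobService delegate) { this.delegate = delegate; }

    @Override public void create(JobDetails job) { delegate.create(job); }
    @Override public void cancel(String jobId) { delegate.cancel(jobId); }
}

// 2b) REST transport: a client invoking the remote job service, which
// runs its own transaction in the other JVM.
class RestJobServiceProxy implements JobService {
    @Override public void create(JobDetails job) {
        // e.g. POST /jobs with the serialized job; sketch only
    }
    @Override public void cancel(String jobId) {
        // e.g. DELETE /jobs/{jobId}; sketch only
    }
}
```

The point of the interface is that the runtime only ever talks to `JobService`; which proxy sits behind it is a deployment decision.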

It does not really matter where the transaction is executed, because
the only difference is the transport. Forget about the reactive
stuff for now.

If you have an in-VM deployed / colocated / embedded (same JVM) job
service and there is an overdue trigger, the lifecycle will still be the
same (scheduled - completed), but the entire thing will happen in the
same tx and JVM.
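A minimal sketch of that colocated overdue case, with hypothetical names: if the fire time has already passed at scheduling time, the action fires in place, still in the same JVM and transaction, and the lifecycle goes straight from scheduled to completed.

```java
import java.time.Instant;

// Hypothetical embedded scheduler, sketch only.
class EmbeddedScheduler {
    enum Status { SCHEDULED, COMPLETED }

    Status schedule(Instant fireTime, Runnable action) {
        if (!fireTime.isAfter(Instant.now())) {
            action.run();              // overdue trigger: fire in place
            return Status.COMPLETED;
        }
        // a real implementation would register a timer here; sketch only
        return Status.SCHEDULED;
    }
}
```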

If you have it in a different subsystem (2 JVMs), the way you send the
message should be involved in the tx, so what you will have is:

# runtime (first tx)
execute some stuff
create job via the job service
   - send message to the job service

# job service (the other JVM)
consume message
create job
schedule for execution

That will be the other tx, in the second JVM.
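The two-transaction flow above can be sketched as follows. All names are hypothetical, and the in-memory queue just stands in for the wire between the two JVMs; in reality the message send would be enlisted in tx #1 and the consume in tx #2.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of the runtime tx / job service tx split.
class TwoTxFlow {
    final Deque<String> wire = new ArrayDeque<>();     // transport between JVMs
    final List<String> runtimeLog = new ArrayList<>();
    final List<String> jobServiceLog = new ArrayList<>();

    // tx #1, runtime JVM
    void runtimeTx(String jobId) {
        runtimeLog.add("execute some stuff");
        runtimeLog.add("create job " + jobId);
        wire.add(jobId);               // the message send is part of this tx
        // commit tx #1
    }

    // tx #2, job service JVM
    void jobServiceTx() {
        String jobId = wire.poll();    // consume message
        jobServiceLog.add("create job " + jobId);
        jobServiceLog.add("schedule " + jobId + " for execution");
        // commit tx #2
    }
}
```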


That is what we want to achieve. There might be cases where reactive
works out and cases where it does not; we should design this properly.


It is not possible to make all combinations available to the end user
(and it is not even needed - it would not add any value).



On Thu, Nov 16, 2023 at 15:32, Walter Medvedeo
(<[email protected]>) wrote:
>
> Maybe I'm overthinking, or have a technical gap :(, but what I don't see 
> right now, guys, is how to solve this challenge (in the case of option 2).
>
> Let me explain in a very simplified way. At some point in time we need to 
> persist the runtime info and the jobs service info in a single write.
>
> Scenario1: (single write)
>
> 1) OpenTransaction
>
>     2) Write the process information with a non-reactive datasource.
>     3) Write the job-service related information to create a job; this time 
> we use not only a different datasource, but also a reactive datasource.
>
> 4) Commit Transaction
>
> What I don't see is how this will work.
>
>
> Scenario2: (overdue trigger)
>
> 1) OpenTransaction
>
>     2) Write the process information with a non-reactive datasource.
>     3) Write the job-service related information to create a job; this time 
> we use not only a different datasource, but also a reactive datasource.
>     3.5) It's detected that the job is overdue and must be executed; let's 
> execute it.
>
> 4) Commit Transaction
> 5) (another possibility, instead of 3.5)
> After the commit, it's detected that the job is overdue and must be executed; 
> let's execute it.
>
> Now, where does the code that achieves the execution of 3.5 (or 5) run? In 
> the kogito-runtimes JVM or in the jobs-service JVM?
>
> Regards,
> Walter.
>
>
> On 2023/11/16 10:38:13 Enrique Gonzalez Martinez wrote:
> > Hi Walter:
> >
> > 1) You can still be reactive if you like, by building a reactive
> > transport layer. Moving that to the transport layer, you will have the
> > same benefit of not blocking the IO at the endpoint level, IMO, in any
> > case using this sort of approach.
> > 2) That should not invalidate the logic. That is to say, the job
> > service will do the create - schedule - fire execution in place.
> >
> > Ideally this should be like
> >
> > as colocated service
> > runtime  -> job service API -> embedded job service proxy (in vm
> > transport) -> job service
> >
> > as distributed
> > runtime -> job service API -> rest job service proxy (rest client) ||
> > wire || ->  endpoint (reactive or not) -> job service
> >
> >
> >
> >
> >
> > On Thu, Nov 16, 2023 at 10:15, Walter Medvedeo
> > (<[email protected]>) wrote:
> > >
> > > Hi Alex, and guys, thanks for starting the discussion.
> > >
> > > Let me add some comments that I think might be useful to consider for the 
> > > evaluation/implementation.
> > >
> > > Regarding option 2), we must keep in mind that right now the jobs service 
> > > DB writes are reactive, as is the scheduling process, etc. If we 
> > > keep this approach, I think we have a potential challenge here in how to 
> > > integrate the non-reactive kogito-runtime DB writes (and its synchronous 
> > > execution model in general) with the subsequent reactive scheduling of the 
> > > job, since in the end we want all of this to happen in the same transaction.
> > > What is the plan in that regard?
> > >
> > >
> > > Another thing to keep in mind is the treatment of overdue jobs. There are 
> > > situations where the job being created has already passed its 
> > > execution time; in these cases it is automatically fired, and the 
> > > corresponding associated action is executed immediately. If I recall 
> > > correctly, this is decided as part of the scheduling process. I think 
> > > we must be sure that we keep the possibility of firing overdue jobs 
> > > (something that is optionally configurable) while ensuring that the 
> > > execution of the job happens not on the kogito-runtimes side but in 
> > > the jobs-service.
> > >
> > > Regards,
> > > Walter.
> > >
> > >
> > > On 2023/11/15 20:58:11 Alex Porcelli wrote:
> > > > Similar to Enrique, I'd +1 the second option, as it aligns better
> > > > with the current data-index approach. It would allow, for the sake of
> > > > simplicity.
> > > >
> > > > Are we ok if we move forward with the second option as a proposal to
> > > > be implemented?
> > > >
> > > >
> > > > On Tue, Nov 14, 2023 at 4:40 AM Enrique Gonzalez Martinez
> > > > <[email protected]> wrote:
> > > > >
> > > > > That case is fixed as it uses the EventPublisher interface.
> > > > >
> > > > > On Tue, Nov 14, 2023, 10:25, Pere Fernandez (apache) 
> > > > > <[email protected]>
> > > > > wrote:
> > > > >
> > > > > > Something we may also need to think about is the communication 
> > > > > > between the Job-Service and the Data-Index. Currently the 
> > > > > > Job-Service needs to message the Data-Index when a Job status 
> > > > > > changes. I think that for the purpose of the Simplified 
> > > > > > Architecture it should be able to write directly into the 
> > > > > > Data-Index DB, in a similar manner as the runtime does when the
> > > > > > `kogito-addons-quarkus-data-index-persistence` addons are present.
> > > > > >
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: [email protected]
> > > > For additional commands, e-mail: [email protected]
> > > >
> > > >
> > >
> >
>
