Thank you, Francisco, for digging deeper into this…

Looking forward to seeing the results of your suggested improvements.


On Fri, Nov 24, 2023 at 9:40 AM Francisco Javier Tirado Sarti <
[email protected]> wrote:

> I forgot to attach the queries
>
> On Fri, Nov 24, 2023 at 3:04 PM Francisco Javier Tirado Sarti <
> [email protected]> wrote:
>
>> Hi,
>> A brief update on this topic.
>> After running a simple test with the example
>> https://github.com/apache/incubator-kie-kogito-examples/tree/stable/serverless-workflow-examples/serverless-workflow-data-index-quarkus,
>> the number of updates on the Nodes table is n*n, so we get a perfectly
>> quadratic performance degradation. The problem is worse for Serverless
>> Workflow than for BPMN because the number of nodes is greater than the
>> number of states. In that example n is 16, but for a more complex workflow
>> it would certainly be larger.
>> I think this is more related to how we are handling JPA in the code, in
>> particular the mapping from model to entity (basically JPA is blind and has
>> to update all nodes on every write because it believes each node has been
>> updated, even though it has not), than to an issue in the table definition.
>> In fact, when using JPA, separating the server model from the JPA entity is
>> not a good idea, especially if the entity contains collections. I will try
>> to change that without breaking anything.
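>>
>> To illustrate what I mean (a rough Java sketch; class and method names here
>> are placeholders rather than the actual data-index code): the merger rebuilds
>> the entity's node collection from the incoming model, so Hibernate cannot
>> match the new instances to the managed rows and rewrites every node on every
>> event:
>>
>>     // Hypothetical sketch, not the real data-index entity classes.
>>     // Rebuilding the collection makes every child row look new, so each
>>     // event touches all n nodes -> n events * n nodes = n*n writes.
>>     void mapNodes(ProcessInstanceModel model, ProcessInstanceEntity entity) {
>>         entity.getNodes().clear();
>>         for (NodeInstanceModel node : model.getNodes()) {
>>             // detached instances that Hibernate cannot match to existing rows
>>             entity.getNodes().add(toNodeEntity(node));
>>         }
>>     }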
>>
>> On Wed, Nov 22, 2023 at 12:10 PM Enrique Gonzalez Martinez <
>> [email protected]> wrote:
>>
>>> After the events split, you will now need to create a node instance
>>> model, making it independent from the process instance.
>>> That should do the trick.
>>>
>>> Regarding deleting/inserting, it was fixed at some point.
>>>
>>> On Tue, Nov 21, 2023 at 8:22 PM, Francisco Javier Tirado Sarti
>>> (<[email protected]>) wrote:
>>> >
>>> > Hi Martin,
>>> > I have a task to review the performance of
>>> > ProcessInstanceNodeDataEventMerger.
>>> > My idea is to reduce the number of delete/inserts when processing events
>>> > and try to do it incrementally. That should improve performance.
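>>> >
>>> > Roughly, instead of clearing and re-adding every node row on each event,
>>> > the merger would look up the existing node by id and only insert or update
>>> > the one affected (a sketch with made-up names, not the final code):
>>> >
>>> >     // Hypothetical sketch of the incremental approach; names are illustrative.
>>> >     void mergeNodeEvent(NodeInstanceModel incoming, ProcessInstanceEntity entity) {
>>> >         NodeInstanceEntity existing = entity.getNodes().stream()
>>> >                 .filter(n -> incoming.getId().equals(n.getId()))
>>> >                 .findFirst()
>>> >                 .orElse(null);
>>> >         if (existing == null) {
>>> >             entity.getNodes().add(toNodeEntity(incoming)); // insert only the new node
>>> >         } else {
>>> >             existing.setExitDate(incoming.getExitDate());  // update the managed row in place
>>> >         }
>>> >     }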
>>> > PS:
>>> > I was planning to send an e-mail tomorrow announcing this, in case you
>>> > were already working on a fix for it. I assume you are not, so I will be
>>> > sending a PR soon.
>>> >
>>> > On Tue, Nov 21, 2023 at 6:09 PM Martin Weiler <[email protected]>
>>> > wrote:
>>> >
>>> > > I looked into the new examples using the data-index persistence addon -
>>> > > Neus' PR#1813 [1] for serverless and Pere's branch [2] for workflow
>>> > > (great job both!) - and they work without issues for single requests.
>>> > > However, under some load (I used 'ab' for testing with a light
>>> > > concurrency of 10 parallel requests) I ran into the following problems:
>>> > >
>>> > > (1) Large number of insert/delete calls (e.g. for tables such as nodes,
>>> > > definitions, etc.)
>>> > >
>>> > > (2) Hibernate OptimisticLockExceptions / StaleStateExceptions
>>> > >
>>> > > (3) DB deadlocks
>>> > >
>>> > > (4) Error responses, slow response times
>>> > >
>>> > > The reason I am reaching out with this topic here is to find out whether
>>> > > we are aware of this issue, and whether someone is already looking into
>>> > > it or has been assigned to it.
>>> > >
>>> > > Thanks,
>>> > > Martin
>>> > >
>>> > > [1]
>>> https://github.com/apache/incubator-kie-kogito-examples/pull/1813
>>> > > [2]
>>> > >
>>> https://github.com/pefernan/kogito-examples/tree/example_data-index_persistence
>>> > >
>>> > > ---------------------------------------------------------------------
>>> > > To unsubscribe, e-mail: [email protected]
>>> > > For additional commands, e-mail: [email protected]
>>> > >
>>> > >
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: [email protected]
>>> For additional commands, e-mail: [email protected]
>>>
>>>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
