wmedvede commented on code in PR #628:
URL: 
https://github.com/apache/incubator-kie-kogito-docs/pull/628#discussion_r1609751378


##########
serverlessworkflow/modules/ROOT/pages/cloud/operator/using-persistence.adoc:
##########
@@ -1,26 +1,33 @@
-= Using persistence in the SonataFlow Workflow CR
+= Using persistence in {product_name} workflows
 :compat-mode!:
 // Metadata:
 :description: Using persistence in the workflow instance to store its context
 :keywords: sonataflow, workflow, serverless, operator, kubernetes, persistence
 
+This document describes how to configure a SonataFlow instance to use 
persistence and store the workflow context in a relational database.
 
-This document describes how to configure a SonataFlow instance to use 
persistence to store the flow's context in a relational database.
+Kubernetes pods are stateless by definition. In some scenarios, this can be 
a challenge for workloads that require maintaining the status of 
+the application regardless of the pod's lifecycle. In the case of 
{product_name}, by default, the context of the workflow is lost when the pod 
restarts.
 
-== Configuring the SonataFlow CR to use persistence
+If your workflow requires recovery from such scenarios, you must provide 
additional configuration to enable the 
xref:persistence/core-concepts.adoc#_workflow_runtime_persistence[Workflow 
Runtime Persistence].
+That configuration must be provided by using the 
<<_configuring_the_persistence_using_the_sonataflowplatform_cr, 
SonataFlowPlatform CR>> or the 
<<_configuring_the_persistence_using_the_sonataflow_cr, SonataFlow CR>>, and 
the scope of the configuration differs in each case.
 
-Kubernetes's pods are stateless by definition. In some scenarios, this can be 
a challenge for workloads that require maintaining the status of 
-the application regardless of the pod's lifecycle. In the case of 
{product_name}, the context of the workflow is lost when the pod restarts.
-If your workflow requires recovery from such scenarios, you have to make these 
additions to your workflow CR:
-Use the `persistence` field in the `SonataFlow` workflow spec to define the 
database service located in the same cluster.
-There are 2 ways to accomplish this:
+== Configuring the persistence using the SonataFlowPlatform CR
 
-* Using the Platform CR's defined persistence
-When the Platform CR is deployed with its persistence spec populated it 
enables workflows to leverage its configuration to populate the persistence
-properties in the workflows. 
+The SonataFlowPlatform CR facilitates the configuration of the persistence 
with namespace scope. This means that it is automatically applied to all the 
workflows deployed in
+that namespace. This can be useful to reduce the amount of resources to 
configure, for example, when the workflows deployed in that namespace belong to 
the same application.
+That decision is left to each particular use case; however, it's important to 
know that this configuration can be overridden by any workflow in that 
namespace by using the <<_configuring_the_persistence_using_the_sonataflow_cr, 
SonataFlow CR>>.
 
-[source,yaml,subs="attributes+"]
----
+[NOTE]
+====
+Persistence configurations are applied at workflow deployment time, and 
potential changes in the SonataFlowPlatform will not impact already deployed 
workflows.

Review Comment:
   Good catch, I didn't know it was expected to work this way, which is why I 
added this note as a warning.
   
   So, I did some tests and it looks like it works more or less this way:
    
   For example, I changed the SFP persistence configuration, and it looks like 
the DI, the JS, and the workflow are all restarted. (I mean a forced restart, 
not the pod restarting on its own.)
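   For reference, the kind of SFP persistence change I tested was along these 
lines (the Service and Secret names here are made up, and the field layout is 
a sketch of the operator's persistence spec, so double-check it against the 
operator docs):

   ```yaml
   apiVersion: sonataflow.org/v1alpha08
   kind: SonataFlowPlatform
   metadata:
     name: sonataflow-platform
   spec:
     persistence:
       postgresql:
         serviceRef:
           name: postgres-db          # hypothetical PostgreSQL Service in the same namespace
           databaseName: sonataflow   # changing values here is what triggered the restarts
         secretRef:
           name: postgres-credentials # hypothetical Secret with the DB credentials
           userKey: POSTGRES_USER
           passwordKey: POSTGRES_PASSWORD
   ```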
   
   However, we have an issue :angry: with the jobs-service in that case.
   
   The jobs-service has two database connections:
   
   One connection string is passed as the env var `QUARKUS_DATASOURCE_JDBC_URL` 
in the Deployment.
   All good: this connection is updated as part of the restart I mentioned 
above.
   
   The second connection string is passed in the jobs-service-properties 
config map :angry: , via the property `quarkus.datasource.reactive.url`. (I 
don't know why it was done this way.)
   
   But the config map is not updated as part of the restart procedure. So the 
JS restart is not working well :angry:, since the connection passed as an env 
var points to the new database (refreshed as part of the restart procedure), 
while the connection passed in the jobs-service config map points to the old 
one (not refreshed as part of the procedure). :angry: 
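   To make the mismatch concrete, after the restart the two settings end up 
pointing at different databases, roughly like this (hostnames and URLs are 
illustrative only):

   ```yaml
   # Deployment env var: refreshed by the restart, points to the NEW database
   # (fragment of the jobs-service container spec)
   env:
     - name: QUARKUS_DATASOURCE_JDBC_URL
       value: jdbc:postgresql://new-postgres-db:5432/sonataflow

   # jobs-service-properties ConfigMap: NOT refreshed, still points to the OLD
   # database via this application.properties entry:
   #   quarkus.datasource.reactive.url=postgresql://old-postgres-db:5432/sonataflow
   ```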
   
   While not a blocker, we can't say that this sort of cascade update is 
working 100% well.
   
   So, I have opened an issue for this situation: 
https://github.com/apache/incubator-kie-kogito-serverless-operator/issues/468
   
   Alternatives here:
   
   1) We update the documentation to say that this automatic restart/refresh 
exists, and immediately add a known issue saying that it's not fully working.
   
   2) We leave the documentation as is, to avoid confusing people.
   
   wdyt? @domhanak @ricardozanini 
   

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

