Hi Deepak,

This is regarding FINERACT-2485.

Thanks for the detailed analysis, especially the concurrency issue around
simultaneous duplicate requests. The DB unique constraint approach for
atomic locking makes a lot of sense as a first line of defense.

While exploring the same flow, I was also thinking about how to handle
post-execution retries from the client side (e.g., when the response is
lost after a successful transaction).

In such cases, instead of returning a 409, would it make sense to extend
the design with a response replay mechanism (e.g., storing a status such
as IN_PROGRESS/SUCCESS/FAILED along with the response payload)?

This could allow:

- Returning the same response for duplicate successful requests
- Handling in-progress requests more gracefully
- Improving client experience in unstable network conditions
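To make the idea concrete, here is a minimal in-memory sketch of what I
have in mind. All names here (IdempotencyStore, claim, complete, replay)
are hypothetical illustrations, not existing Fineract APIs, and the
ConcurrentHashMap stands in for the DB table: putIfAbsent plays the role
of the unique-constraint INSERT that atomically claims a key.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a response-replay store keyed by idempotency key.
// In the real design the map would be a DB table with a unique constraint
// on the key, and claim() would be the INSERT that fails on duplicates.
public class IdempotencyStore {

    public enum Status { IN_PROGRESS, SUCCESS, FAILED }

    public static final class Record {
        public volatile Status status = Status.IN_PROGRESS;
        public volatile String responsePayload; // stored for replay on retry
    }

    private final Map<String, Record> records = new ConcurrentHashMap<>();

    /** Atomically claim a key; only the first caller wins (mimics the
     *  unique-constraint insert from the original proposal). */
    public boolean claim(String idempotencyKey) {
        return records.putIfAbsent(idempotencyKey, new Record()) == null;
    }

    /** Record the outcome so later retries can replay the same response. */
    public void complete(String idempotencyKey, Status status, String payload) {
        Record r = records.get(idempotencyKey);
        if (r != null) {
            r.status = status;
            r.responsePayload = payload;
        }
    }

    /** On a duplicate request, look up the stored record: SUCCESS means
     *  replay the payload, IN_PROGRESS means ask the client to wait/retry. */
    public Optional<Record> replay(String idempotencyKey) {
        return Optional.ofNullable(records.get(idempotencyKey));
    }
}
```

So a retry after a lost response would call replay() and get back the
original payload, rather than a 409 forcing the client to reconcile state
itself.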

Do you think this could complement the DB constraint approach, or would it
introduce unnecessary complexity in the current architecture?

Would love to hear your thoughts.
