Hi Artem,

Thanks for writing this KIP. Can you clarify the requirements for managing
transaction state a bit more? It looks like the application must have
stable transactional ids over time. What is the granularity of those ids
and producers? Say the application is a multi-threaded Java web server:
can/should all the concurrent threads share a transactional id and
producer? That doesn't seem right to me unless the application is using
global DB locks that serialize all requests. If instead the application
uses row-level DB locks, there could be multiple concurrent, independent
txns happening in the same JVM, so it seems like the granularity of
managing transactional ids and txn state needs to line up with the
granularity of the DB locking.
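
To make the granularity question concrete, here is a rough sketch of what I
mean (not from the KIP; all names here — ProducerRegistry, appId, shard —
are hypothetical): one stable transactional id per unit of DB locking, e.g.
per shard or row range, rather than one id shared by all threads. In a real
application each id would key a cached KafkaProducer, which I've left out to
keep the sketch self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: derive one stable transactional.id per shard
// (the unit of DB locking), instead of one id shared across all threads.
public class ProducerRegistry {
    // The id must be stable across restarts, so it is derived from a
    // fixed application-instance name plus the shard the instance owns.
    private final String appId;
    private final Map<Integer, String> txnIdsByShard = new ConcurrentHashMap<>();

    public ProducerRegistry(String appId) {
        this.appId = appId;
    }

    // Returns the same transactional.id for a given shard every time;
    // in a real app this would also key a cached KafkaProducer instance.
    public String txnIdFor(int shard) {
        return txnIdsByShard.computeIfAbsent(shard, s -> appId + "-shard-" + s);
    }

    public static void main(String[] args) {
        ProducerRegistry r = new ProducerRegistry("payments-svc-1");
        System.out.println(r.txnIdFor(7));                      // payments-svc-1-shard-7
        System.out.println(r.txnIdFor(7).equals(r.txnIdFor(7))); // true: stable per shard
    }
}
```

With this shape, two threads touching different rows/shards use different
producers and commit independently, which is what row-level locking implies.
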

Does that make sense or am I misunderstanding?

Thanks,

Roger

On Wed, Aug 16, 2023 at 11:40 PM Artem Livshits
<alivsh...@confluent.io.invalid> wrote:

> Hello,
>
> This is a discussion thread for
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-939%3A+Support+Participation+in+2PC
> .
>
> The KIP proposes extending Kafka's transaction support (which already uses
> 2PC under the hood) to enable atomicity of dual writes to Kafka and an
> external database, and helps fix a long-standing Flink issue.
>
> An example of code that uses the dual-write recipe with JDBC, and that
> should work for most SQL databases, is here:
> https://github.com/apache/kafka/pull/14231.
>
> The FLIP for the sister fix in Flink is here:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=255071710
>
> -Artem
>
