Hello Artem,

Thanks for the KIP.

I have the same question as Roger on concurrent writes, and an additional
one on consumer behavior. Typically, transactions will time out if they are
not committed within some time interval. With the proposed changes in this
KIP, consumers cannot consume past an ongoing transaction. I'm curious to
understand what happens if the producer dies and does not come back up to
recover the pending transaction within the transaction timeout interval. Or
are we saying that, when used in this 2PC context, we should configure these
transaction timeouts to very large durations?
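
To make the question concrete, below is a rough sketch of what I imagine the
producer setup would look like if the answer is "use a very large timeout".
This is just against the existing producer API and configs, not anything new
from the KIP; the broker address, transactional id, and class name are
placeholders I made up for the example:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class LongTimeoutProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholders: broker address and transactional id are made up.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "dual-write-producer-1");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "Effectively infinite" transaction timeout so the coordinator does not
        // abort a pending transaction while the producer is down. Note the broker
        // rejects client timeouts larger than its transaction.max.timeout.ms, so
        // that broker setting would have to be raised as well.
        props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, Integer.MAX_VALUE);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            // ... begin/commit transactions as usual ...
        }
    }
}

If that is the intended usage, it also seems to interact with the broker-side
transaction.max.timeout.ms cap (15 minutes by default), which applies to all
transactions on the cluster, not just the 2PC ones.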

Thanks in advance!

Best,
Arjun


On Mon, Aug 21, 2023 at 1:06 PM Roger Hoover <roger.hoo...@gmail.com> wrote:

> Hi Artem,
>
> Thanks for writing this KIP.  Can you clarify the requirements a bit more
> for managing transaction state?  It looks like the application must have
> stable transactional ids over time?  What is the granularity of those ids
> and producers?  Say the application is a multi-threaded Java web server:
> can/should all the concurrent threads share a transactional id and
> producer?  That doesn't seem right to me unless the application is using
> global DB locks that serialize all requests.  Instead, if the application
> uses row-level DB locks, there could be multiple, concurrent, independent
> txns happening in the same JVM, so it seems like the granularity of
> managing transactional ids and txn state needs to line up with the
> granularity of the DB locking.
>
> Does that make sense or am I misunderstanding?
>
> Thanks,
>
> Roger
>
> On Wed, Aug 16, 2023 at 11:40 PM Artem Livshits
> <alivsh...@confluent.io.invalid> wrote:
>
> > Hello,
> >
> > This is a discussion thread for
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-939%3A+Support+Participation+in+2PC
> > .
> >
> > The KIP proposes extending Kafka transaction support (which already uses
> > 2PC under the hood) to enable atomicity of dual writes to Kafka and an
> > external database, and it helps to fix a long-standing Flink issue.
> >
> > An example of code that uses the dual-write recipe with JDBC and should
> > work for most SQL databases is here:
> > https://github.com/apache/kafka/pull/14231.
> >
> > The FLIP for the sister fix in Flink is here:
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=255071710
> >
> > -Artem
> >
>
