Re: Exactly once kafka connect query

2023-03-13 Thread NITTY BENNY
Hi Chris,

I don't understand why a graceful shutdown would happen during a commit
operation. Am I misunderstanding something here? I see this happen when the
first batch has 2 valid records and the second batch contains an invalid
record. In that case I want to commit the valid records, so I call commit and
return an empty list for the current batch from poll(). Then, when the next
file comes in and poll() sees new records, I get an
InvalidProducerEpochException.
Please advise.
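
For reference, here is a minimal sketch of the pattern I am describing (an
illustration only, assuming Kafka 3.3+ with transaction.boundary=connector;
parseNextBatch and Batch are placeholders rather than my actual connector code):

import java.util.Collections;
import java.util.List;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;
import org.apache.kafka.connect.source.TransactionContext;

public abstract class CommitOnInvalidRecordSketch extends SourceTask {

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // Non-null only when the connector defines its own transaction boundaries.
        TransactionContext txn = context.transactionContext();
        Batch batch = parseNextBatch(); // placeholder for the file-parsing logic
        if (txn != null && batch.hasInvalidRecord()) {
            // Ask the framework to commit whatever was already produced in the
            // current transaction, and return an empty batch for the bad input.
            txn.commitTransaction();
            return Collections.emptyList();
        }
        return batch.validRecords();
    }

    // Placeholders for illustration only.
    protected abstract Batch parseNextBatch();

    public interface Batch {
        boolean hasInvalidRecord();
        List<SourceRecord> validRecords();
    }
}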

Thanks,
Nitty

On Mon, Mar 13, 2023 at 5:33 PM NITTY BENNY  wrote:

> Hi Chris,
>
> The difference is in the Task classes; there is no difference in the
> value/key converters.
>
> I don’t see log messages for graceful shutdown. I am not clear on what you
> mean by shutting down the task.
>
> I called the commit operation for the successful records. Should I perform
> any other steps if I have an invalid record?
> Please advise.
>
> Thanks,
> Nitty
>
> On Mon, Mar 13, 2023 at 3:42 PM Chris Egerton 
> wrote:
>
>> Hi Nitty,
>>
>> Thanks again for all the details here, especially the log messages.
>>
>> > The issue mentioned below is happening for the JSON connector only. Is
>> > there any difference between the asn1, binary, csv and json connectors?
>>
>> Can you clarify whether the difference here is in the Connector/Task
>> classes, or in the key/value converters that are configured for the
>> connector? The key/value converters are configured using the
>> "key.converter" and "value.converter" properties and, if problems arise with
>> them, the task will fail and, if it has a non-empty ongoing transaction,
>> that transaction will be automatically aborted, since we close the task's
>> Kafka producer when it fails (or shuts down gracefully).
>>
>> With regards to these log messages:
>>
>> > org.apache.kafka.common.errors.ProducerFencedException: There is a newer
>> producer with the same transactionalId which fences the current one.
>>
>> It looks like your tasks aren't shutting down gracefully in time, which
>> causes them to be fenced out by the Connect framework later on. Do you see
>> messages like "Graceful stop of task  failed" in the logs
>> for
>> your Connect worker?
>>
>> Cheers,
>>
>> Chris
>>
>> On Mon, Mar 13, 2023 at 10:58 AM NITTY BENNY 
>> wrote:
>>
>> > Hi Chris,
>> >
>> > As you said, when I call an abort because of an invalid record, the
>> > message below appears; then, for the next transaction, I see the message
>> > below and the connector stops.
>> > 2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0]
>> Aborting
>> > transaction for batch as requested by connector
>> > (org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
>> > [task-thread-json-sftp-source-connector-0]
>> > 2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0]
>> [Producer
>> > clientId=connector-producer-json-sftp-source-connector-0,
>> > transactionalId=connect-cluster-json-sftp-source-connector-0] Aborting
>> > incomplete transaction (org.apache.kafka.clients.producer.KafkaProducer)
>> > [task-thread-json-sftp-source-connector-0]
>> >
>> > The InvalidProducerEpochException issue happens when I call commit after
>> > hitting an invalid record; in the next transaction I get the
>> > InvalidProducerEpochException, and the messages are copied in my previous
>> > email. I don't know whether this will also be fixed by your bug fix. I am
>> > using Kafka 3.3.1 as of now.
>> >
>> > Thanks,
>> > Nitty
>> >
>> >
>> > On Mon, Mar 13, 2023 at 10:47 AM NITTY BENNY 
>> wrote:
>> >
>> > > Hi Chris,
>> > >
>> > > The issue mentioned below is happening for the JSON connector only. Is
>> > > there any difference between the asn1, binary, csv and json connectors?
>> > >
>> > > Thanks,
>> > > Nitty
>> > >
>> > > On Mon, Mar 13, 2023 at 9:16 AM NITTY BENNY 
>> > wrote:
>> > >
>> > >> Hi Chris,
>> > >>
>> > >> Sorry Chris, I am not able to reproduce the above issue.
>> > >>
>> > >> I want to share with you one more use case I found.
>> > >> The use case: the first batch returns 2 valid records, and then the
>> > >> next batch contains an invalid record. Below is the __transaction_state
>> > >> topic content when I call a commit while processing the invalid record.
>> > >>
>> > >>
>> >
>> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
>> > >> producerId=11, producerEpoch=2, txnTimeoutMs=6, state=*Ongoing*,
>> > >> pendingState=None, topicPartitions=HashSet(streams-input-2),
>> > >> txnStartTimestamp=1678620463834,
>> txnLastUpdateTimestamp=1678620463834)
>> > >>
>> > >> Then, after some time, I saw the states below as well:
>> > >>
>> > >>
>> >
>> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
>> > >> producerId=11, producerEpoch=3, txnTimeoutMs=6,
>> > state=*PrepareAbort*,
>> > >> pendingState=None, topicPartitions=HashSet(streams-input-2),
>> > >

Re: Exactly once kafka connect query

2023-03-13 Thread NITTY BENNY
Hi Chris,

The difference is in the Task classes; there is no difference in the
value/key converters.

I don’t see log messages for graceful shutdown. I am not clear on what you
mean by shutting down the task.

I called the commit operation for the successful records. Should I perform
any other steps if I have an invalid record?
Please advise.

Thanks,
Nitty

On Mon, Mar 13, 2023 at 3:42 PM Chris Egerton 
wrote:

> Hi Nitty,
>
> Thanks again for all the details here, especially the log messages.
>
> > The issue mentioned below is happening for the JSON connector only. Is
> > there any difference between the asn1, binary, csv and json connectors?
>
> Can you clarify whether the difference here is in the Connector/Task
> classes, or in the key/value converters that are configured for the
> connector? The key/value converters are configured using the
> "key.converter" and "value.converter" properties and, if problems arise with
> them, the task will fail and, if it has a non-empty ongoing transaction,
> that transaction will be automatically aborted, since we close the task's
> Kafka producer when it fails (or shuts down gracefully).
>
> With regards to these log messages:
>
> > org.apache.kafka.common.errors.ProducerFencedException: There is a newer
> producer with the same transactionalId which fences the current one.
>
> It looks like your tasks aren't shutting down gracefully in time, which
> causes them to be fenced out by the Connect framework later on. Do you see
> messages like "Graceful stop of task  failed" in the logs for
> your Connect worker?
>
> Cheers,
>
> Chris
>
> On Mon, Mar 13, 2023 at 10:58 AM NITTY BENNY  wrote:
>
> > Hi Chris,
> >
> > As you said, when I call an abort because of an invalid record, the
> > message below appears; then, for the next transaction, I see the message
> > below and the connector stops.
> > 2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0] Aborting
> > transaction for batch as requested by connector
> > (org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
> > [task-thread-json-sftp-source-connector-0]
> > 2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0]
> [Producer
> > clientId=connector-producer-json-sftp-source-connector-0,
> > transactionalId=connect-cluster-json-sftp-source-connector-0] Aborting
> > incomplete transaction (org.apache.kafka.clients.producer.KafkaProducer)
> > [task-thread-json-sftp-source-connector-0]
> >
> > The InvalidProducerEpochException issue happens when I call commit after
> > hitting an invalid record; in the next transaction I get the
> > InvalidProducerEpochException, and the messages are copied in my previous
> > email. I don't know whether this will also be fixed by your bug fix. I am
> > using Kafka 3.3.1 as of now.
> >
> > Thanks,
> > Nitty
> >
> >
> > On Mon, Mar 13, 2023 at 10:47 AM NITTY BENNY 
> wrote:
> >
> > > Hi Chris,
> > >
> > > The issue mentioned below is happening for the JSON connector only. Is
> > > there any difference between the asn1, binary, csv and json connectors?
> > >
> > > Thanks,
> > > Nitty
> > >
> > > On Mon, Mar 13, 2023 at 9:16 AM NITTY BENNY 
> > wrote:
> > >
> > >> Hi Chris,
> > >>
> > >> Sorry Chris, I am not able to reproduce the above issue.
> > >>
> > >> I want to share with you one more use case I found.
> > >> The use case: the first batch returns 2 valid records, and then the
> > >> next batch contains an invalid record. Below is the __transaction_state
> > >> topic content when I call a commit while processing the invalid record.
> > >>
> > >>
> >
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> > >> producerId=11, producerEpoch=2, txnTimeoutMs=6, state=*Ongoing*,
> > >> pendingState=None, topicPartitions=HashSet(streams-input-2),
> > >> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620463834)
> > >>
> > >> Then, after some time, I saw the states below as well:
> > >>
> > >>
> >
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> > >> producerId=11, producerEpoch=3, txnTimeoutMs=6,
> > state=*PrepareAbort*,
> > >> pendingState=None, topicPartitions=HashSet(streams-input-2),
> > >> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526119)
> > >>
> >
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> > >> producerId=11, producerEpoch=3, txnTimeoutMs=6,
> > state=*CompleteAbort*,
> > >> pendingState=None, topicPartitions=HashSet(),
> > >> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526121)
> > >>
> > >> Later, for the next transaction, when it returns the first batch, below
> > >> are the logs I can see.
> > >>
> > >>  Transiting to abortable error state due to
> > >> org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
> > >> attempted to produc

Re: Exactly once kafka connect query

2023-03-13 Thread Chris Egerton
Hi Nitty,

Thanks again for all the details here, especially the log messages.

> The issue mentioned below is happening for the JSON connector only. Is there
> any difference between the asn1, binary, csv and json connectors?

Can you clarify whether the difference here is in the Connector/Task classes,
or in the key/value converters that are configured for the connector? The
key/value converters are configured using the "key.converter" and
"value.converter" properties and, if problems arise with them, the task will
fail and, if it has a non-empty ongoing transaction, that transaction will be
automatically aborted, since we close the task's Kafka producer when it fails
(or shuts down gracefully).
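
For reference, those properties typically show up in a connector config roughly
like this (illustrative values only; the connector class is a placeholder and
the right converters depend on your data format):

# Illustrative connector configuration (properties form).
name=json-sftp-source-connector
connector.class=com.example.JsonSftpSourceConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
# Exactly-once source settings (Kafka 3.3+), assuming the connector defines its
# own transaction boundaries as discussed in this thread:
exactly.once.support=required
transaction.boundary=connector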

With regards to these log messages:

> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
producer with the same transactionalId which fences the current one.

It looks like your tasks aren't shutting down gracefully in time, which
causes them to be fenced out by the Connect framework later on. Do you see
messages like "Graceful stop of task  failed" in the logs for
your Connect worker?
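
(If that is what's happening, one knob that may be worth checking, depending on
your setup, is the worker-level graceful shutdown timeout, for example:

# Connect worker configuration; 5000 ms is the default.
task.shutdown.graceful.timeout.ms=30000

That only treats the symptom, though; it's still worth finding out why the task
takes so long to stop.)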

Cheers,

Chris

On Mon, Mar 13, 2023 at 10:58 AM NITTY BENNY  wrote:

> Hi Chris,
>
> As you said, when I call an abort because of an invalid record, the message
> below appears; then, for the next transaction, I see the message below and the
> connector stops.
> 2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0] Aborting
> transaction for batch as requested by connector
> (org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
> [task-thread-json-sftp-source-connector-0]
> 2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0] [Producer
> clientId=connector-producer-json-sftp-source-connector-0,
> transactionalId=connect-cluster-json-sftp-source-connector-0] Aborting
> incomplete transaction (org.apache.kafka.clients.producer.KafkaProducer)
> [task-thread-json-sftp-source-connector-0]
>
> The InvalidProducerEpochException issue happens when I call commit after
> hitting an invalid record; in the next transaction I get the
> InvalidProducerEpochException, and the messages are copied in my previous
> email. I don't know whether this will also be fixed by your bug fix. I am
> using Kafka 3.3.1 as of now.
>
> Thanks,
> Nitty
>
>
> On Mon, Mar 13, 2023 at 10:47 AM NITTY BENNY  wrote:
>
> > Hi Chris,
> >
> > The issue mentioned below is happening for the JSON connector only. Is
> > there any difference between the asn1, binary, csv and json connectors?
> >
> > Thanks,
> > Nitty
> >
> > On Mon, Mar 13, 2023 at 9:16 AM NITTY BENNY 
> wrote:
> >
> >> Hi Chris,
> >>
> >> Sorry Chris, I am not able to reproduce the above issue.
> >>
> >> I want to share with you one more use case I found.
> >> The use case: the first batch returns 2 valid records, and then the next
> >> batch contains an invalid record. Below is the __transaction_state topic
> >> content when I call a commit while processing the invalid record.
> >>
> >>
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> >> producerId=11, producerEpoch=2, txnTimeoutMs=6, state=*Ongoing*,
> >> pendingState=None, topicPartitions=HashSet(streams-input-2),
> >> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620463834)
> >>
> >> Then, after some time, I saw the states below as well:
> >>
> >>
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> >> producerId=11, producerEpoch=3, txnTimeoutMs=6,
> state=*PrepareAbort*,
> >> pendingState=None, topicPartitions=HashSet(streams-input-2),
> >> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526119)
> >>
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> >> producerId=11, producerEpoch=3, txnTimeoutMs=6,
> state=*CompleteAbort*,
> >> pendingState=None, topicPartitions=HashSet(),
> >> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526121)
> >>
> >> Later, for the next transaction, when it returns the first batch, below
> >> are the logs I can see.
> >>
> >>  Transiting to abortable error state due to
> >> org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
> >> attempted to produce with an old epoch.
> >> (org.apache.kafka.clients.producer.internals.TransactionManager)
> >> [kafka-producer-network-thread |
> >> connector-producer-json-sftp-source-connector-0]
> >> 2023-03-12 11:32:45,220 ERROR [json-sftp-source-connector|task-0]
> >> ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} failed to
> send
> >> record to streams-input:
> >> (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask)
> >> [kafka-producer-network-thread |
> >> connector-producer-json-sftp-source-connector-0]
> >> org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
> >> attempted to produce

Re: [ANNOUNCE] New Kafka PMC Member: David Arthur

2023-03-13 Thread Satish Duggana
Congratulations David!

On Mon, 13 Mar 2023 at 16:08, Rajini Sivaram 
wrote:

> Congratulations, David!
>
> Regards,
>
> Rajini
>
> On Mon, Mar 13, 2023 at 9:06 AM Bruno Cadonna  wrote:
>
> > Congrats, David!
> >
> > Bruno
> >
> > On 10.03.23 01:36, Matthias J. Sax wrote:
> > > Congrats!
> > >
> > > On 3/9/23 2:59 PM, José Armando García Sancio wrote:
> > >> Congrats David!
> > >>
> > >> On Thu, Mar 9, 2023 at 2:00 PM Kowshik Prakasam 
> > >> wrote:
> > >>>
> > >>> Congrats David!
> > >>>
> > >>> On Thu, Mar 9, 2023 at 12:09 PM Lucas Brutschy
> > >>>  wrote:
> > >>>
> >  Congratulations!
> > 
> >  On Thu, Mar 9, 2023 at 8:37 PM Manikumar  >
> >  wrote:
> > >
> > > Congrats David!
> > >
> > >
> > > On Fri, Mar 10, 2023 at 12:24 AM Josep Prat
> > >  > >
> > > wrote:
> > >>
> > >> Congrats David!
> > >>
> > >> ———
> > >> Josep Prat
> > >>
> > >> Aiven Deutschland GmbH
> > >>
> > >> Alexanderufer 3-7, 10117 Berlin
> > >>
> > >> Amtsgericht Charlottenburg, HRB 209739 B
> > >>
> > >> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > >>
> > >> m: +491715557497
> > >>
> > >> w: aiven.io
> > >>
> > >> e: josep.p...@aiven.io
> > >>
> > >> On Thu, Mar 9, 2023, 19:22 Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > >>
> > >>> Congratulations David!
> > >>>
> > >>> On Thu, Mar 9, 2023 at 7:20 PM Chris Egerton
> > >>>  > >
> > >>> wrote:
> > 
> >  Congrats David!
> > 
> >  On Thu, Mar 9, 2023 at 1:17 PM Bill Bejeck 
> >  wrote:
> > 
> > > Congratulations David!
> > >
> > > On Thu, Mar 9, 2023 at 1:12 PM Jun Rao
>  > >
> > >>> wrote:
> > >
> > >> Hi, Everyone,
> > >>
> > >> David Arthur has been a Kafka committer since 2013. He has
> been
> > > very
> > >> instrumental to the community since becoming a committer. It's
> >  my
> > > pleasure
> > >> to announce that David is now a member of Kafka PMC.
> > >>
> > >> Congratulations David!
> > >>
> > >> Jun
> > >> on behalf of Apache Kafka PMC
> > >>
> > >
> > >>>
> > 
> > >>
> > >>
> > >>
> >
>


Re: Exactly once kafka connect query

2023-03-13 Thread NITTY BENNY
Hi Chris,

As you said, when I call an abort because of an invalid record, the message
below appears; then, for the next transaction, I see the message below and the
connector stops.
2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0] Aborting
transaction for batch as requested by connector
(org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
[task-thread-json-sftp-source-connector-0]
2023-03-13 14:28:26,043 INFO [json-sftp-source-connector|task-0] [Producer
clientId=connector-producer-json-sftp-source-connector-0,
transactionalId=connect-cluster-json-sftp-source-connector-0] Aborting
incomplete transaction (org.apache.kafka.clients.producer.KafkaProducer)
[task-thread-json-sftp-source-connector-0]
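
For context, my abort path looks roughly like the sketch below (again an
illustration only, with placeholder helper names, assuming
transaction.boundary=connector):

import java.util.Collections;
import java.util.List;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;
import org.apache.kafka.connect.source.TransactionContext;

public abstract class AbortOnInvalidRecordSketch extends SourceTask {

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        TransactionContext txn = context.transactionContext();
        if (txn != null && currentBatchIsInvalid()) {
            // Discard everything already produced in the current transaction;
            // the framework aborts it once this (empty) batch is processed.
            txn.abortTransaction();
            return Collections.emptyList();
        }
        return nextValidRecords();
    }

    // Placeholders for illustration only.
    protected abstract boolean currentBatchIsInvalid();

    protected abstract List<SourceRecord> nextValidRecords();
}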

The InvalidProducerEpochException issue happens when I call commit after
hitting an invalid record; in the next transaction I get the
InvalidProducerEpochException, and the messages are copied in my previous
email. I don't know whether this will also be fixed by your bug fix. I am using
Kafka 3.3.1 as of now.

Thanks,
Nitty


On Mon, Mar 13, 2023 at 10:47 AM NITTY BENNY  wrote:

> Hi Chris,
>
> The issue mentioned below is happening for the JSON connector only. Is there
> any difference between the asn1, binary, csv and json connectors?
>
> Thanks,
> Nitty
>
> On Mon, Mar 13, 2023 at 9:16 AM NITTY BENNY  wrote:
>
>> Hi Chris,
>>
>> Sorry Chris, I am not able to reproduce the above issue.
>>
>> I want to share with you one more use case I found.
>> The use case: the first batch returns 2 valid records, and then the next
>> batch contains an invalid record. Below is the __transaction_state topic
>> content when I call a commit while processing the invalid record.
>>
>> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
>> producerId=11, producerEpoch=2, txnTimeoutMs=6, state=*Ongoing*,
>> pendingState=None, topicPartitions=HashSet(streams-input-2),
>> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620463834)
>>
>> Then, after some time, I saw the states below as well:
>>
>> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
>> producerId=11, producerEpoch=3, txnTimeoutMs=6, state=*PrepareAbort*,
>> pendingState=None, topicPartitions=HashSet(streams-input-2),
>> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526119)
>> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
>> producerId=11, producerEpoch=3, txnTimeoutMs=6, state=*CompleteAbort*,
>> pendingState=None, topicPartitions=HashSet(),
>> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526121)
>>
>> Later, for the next transaction, when it returns the first batch, below are
>> the logs I can see.
>>
>>  Transiting to abortable error state due to
>> org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
>> attempted to produce with an old epoch.
>> (org.apache.kafka.clients.producer.internals.TransactionManager)
>> [kafka-producer-network-thread |
>> connector-producer-json-sftp-source-connector-0]
>> 2023-03-12 11:32:45,220 ERROR [json-sftp-source-connector|task-0]
>> ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} failed to send
>> record to streams-input:
>> (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask)
>> [kafka-producer-network-thread |
>> connector-producer-json-sftp-source-connector-0]
>> org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
>> attempted to produce with an old epoch.
>> 2023-03-12 11:32:45,222 INFO [json-sftp-source-connector|task-0]
>> [Producer clientId=connector-producer-json-sftp-source-connector-0,
>> transactionalId=connect-cluster-json-sftp-source-connector-0] Transiting to
>> fatal error state due to
>> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
>> producer with the same transactionalId which fences the current one.
>> (org.apache.kafka.clients.producer.internals.TransactionManager)
>> [kafka-producer-network-thread |
>> connector-producer-json-sftp-source-connector-0]
>> 2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0]
>> [Producer clientId=connector-producer-json-sftp-source-connector-0,
>> transactionalId=connect-cluster-json-sftp-source-connector-0] Aborting
>> producer batches due to fatal error
>> (org.apache.kafka.clients.producer.internals.Sender)
>> [kafka-producer-network-thread |
>> connector-producer-json-sftp-source-connector-0]
>> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
>> producer with the same transactionalId which fences the current one.
>> 2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0]
>> ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} Failed to
>> flush offsets to storage:
>> (org.apache.kafka.connect.runtime.ExactlyOnceWorkerSou

Re: Exactly once kafka connect query

2023-03-13 Thread NITTY BENNY
Hi Chris,

The issue mentioned below is happening for the JSON connector only. Is there
any difference between the asn1, binary, csv and json connectors?

Thanks,
Nitty

On Mon, Mar 13, 2023 at 9:16 AM NITTY BENNY  wrote:

> Hi Chris,
>
> Sorry Chris, I am not able to reproduce the above issue.
>
> I want to share with you one more use case I found.
> The use case: the first batch returns 2 valid records, and then the next
> batch contains an invalid record. Below is the __transaction_state topic
> content when I call a commit while processing the invalid record.
>
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> producerId=11, producerEpoch=2, txnTimeoutMs=6, state=*Ongoing*,
> pendingState=None, topicPartitions=HashSet(streams-input-2),
> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620463834)
>
> Then, after some time, I saw the states below as well:
>
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> producerId=11, producerEpoch=3, txnTimeoutMs=6, state=*PrepareAbort*,
> pendingState=None, topicPartitions=HashSet(streams-input-2),
> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526119)
> connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
> producerId=11, producerEpoch=3, txnTimeoutMs=6, state=*CompleteAbort*,
> pendingState=None, topicPartitions=HashSet(),
> txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526121)
>
> Later, for the next transaction, when it returns the first batch, below are
> the logs I can see.
>
>  Transiting to abortable error state due to
> org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
> attempted to produce with an old epoch.
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [kafka-producer-network-thread |
> connector-producer-json-sftp-source-connector-0]
> 2023-03-12 11:32:45,220 ERROR [json-sftp-source-connector|task-0]
> ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} failed to send
> record to streams-input:
> (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask)
> [kafka-producer-network-thread |
> connector-producer-json-sftp-source-connector-0]
> org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
> attempted to produce with an old epoch.
> 2023-03-12 11:32:45,222 INFO [json-sftp-source-connector|task-0] [Producer
> clientId=connector-producer-json-sftp-source-connector-0,
> transactionalId=connect-cluster-json-sftp-source-connector-0] Transiting to
> fatal error state due to
> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
> producer with the same transactionalId which fences the current one.
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [kafka-producer-network-thread |
> connector-producer-json-sftp-source-connector-0]
> 2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0]
> [Producer clientId=connector-producer-json-sftp-source-connector-0,
> transactionalId=connect-cluster-json-sftp-source-connector-0] Aborting
> producer batches due to fatal error
> (org.apache.kafka.clients.producer.internals.Sender)
> [kafka-producer-network-thread |
> connector-producer-json-sftp-source-connector-0]
> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
> producer with the same transactionalId which fences the current one.
> 2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0]
> ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} Failed to
> flush offsets to storage:
> (org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
> [kafka-producer-network-thread |
> connector-producer-json-sftp-source-connector-0]
> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
> producer with the same transactionalId which fences the current one.
> 2023-03-12 11:32:45,224 ERROR [json-sftp-source-connector|task-0]
> ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} failed to send
> record to streams-input:
> (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask)
> [kafka-producer-network-thread |
> connector-producer-json-sftp-source-connector-0]
> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
> producer with the same transactionalId which fences the current one.
> 2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0|offsets]
> ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} Failed to
> commit producer transaction
> (org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
> [task-thread-json-sftp-source-connector-0]
> org.apache.kafka.common.errors.ProducerFencedException: There is a newer
> producer with the same transactionalId which fences the current one.
> 2023-03-12 11:32:45,225 ERROR [json-sftp-source-connector|task-0]
> ExactlyOnceWo

Re: [ANNOUNCE] New Kafka PMC Member: David Arthur

2023-03-13 Thread Rajini Sivaram
Congratulations, David!

Regards,

Rajini

On Mon, Mar 13, 2023 at 9:06 AM Bruno Cadonna  wrote:

> Congrats, David!
>
> Bruno
>
> On 10.03.23 01:36, Matthias J. Sax wrote:
> > Congrats!
> >
> > On 3/9/23 2:59 PM, José Armando García Sancio wrote:
> >> Congrats David!
> >>
> >> On Thu, Mar 9, 2023 at 2:00 PM Kowshik Prakasam 
> >> wrote:
> >>>
> >>> Congrats David!
> >>>
> >>> On Thu, Mar 9, 2023 at 12:09 PM Lucas Brutschy
> >>>  wrote:
> >>>
>  Congratulations!
> 
>  On Thu, Mar 9, 2023 at 8:37 PM Manikumar 
>  wrote:
> >
> > Congrats David!
> >
> >
> > On Fri, Mar 10, 2023 at 12:24 AM Josep Prat
> >  >
> > wrote:
> >>
> >> Congrats David!
> >>
> >> ———
> >> Josep Prat
> >>
> >> Aiven Deutschland GmbH
> >>
> >> Alexanderufer 3-7, 10117 Berlin
> >>
> >> Amtsgericht Charlottenburg, HRB 209739 B
> >>
> >> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> >>
> >> m: +491715557497
> >>
> >> w: aiven.io
> >>
> >> e: josep.p...@aiven.io
> >>
> >> On Thu, Mar 9, 2023, 19:22 Mickael Maison  >
> > wrote:
> >>
> >>> Congratulations David!
> >>>
> >>> On Thu, Mar 9, 2023 at 7:20 PM Chris Egerton
> >>>  >
> >>> wrote:
> 
>  Congrats David!
> 
>  On Thu, Mar 9, 2023 at 1:17 PM Bill Bejeck 
>  wrote:
> 
> > Congratulations David!
> >
> > On Thu, Mar 9, 2023 at 1:12 PM Jun Rao  >
> >>> wrote:
> >
> >> Hi, Everyone,
> >>
> >> David Arthur has been a Kafka committer since 2013. He has been
> > very
> >> instrumental to the community since becoming a committer. It's
>  my
> > pleasure
> >> to announce that David is now a member of Kafka PMC.
> >>
> >> Congratulations David!
> >>
> >> Jun
> >> on behalf of Apache Kafka PMC
> >>
> >
> >>>
> 
> >>
> >>
> >>
>


Re: Exactly once kafka connect query

2023-03-13 Thread NITTY BENNY
Hi Chris,

Sorry Chris, I am not able to reproduce the above issue.

I want to share with you one more use case I found.
The use case: the first batch returns 2 valid records, and then the next
batch contains an invalid record. Below is the __transaction_state topic
content when I call a commit while processing the invalid record.

connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
producerId=11, producerEpoch=2, txnTimeoutMs=6, state=*Ongoing*,
pendingState=None, topicPartitions=HashSet(streams-input-2),
txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620463834)

Then, after some time, I saw the states below as well:

connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
producerId=11, producerEpoch=3, txnTimeoutMs=6, state=*PrepareAbort*,
pendingState=None, topicPartitions=HashSet(streams-input-2),
txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526119)
connect-cluster-json-sftp-source-connector-0::TransactionMetadata(transactionalId=connect-cluster-json-sftp-source-connector-0,
producerId=11, producerEpoch=3, txnTimeoutMs=6, state=*CompleteAbort*,
pendingState=None, topicPartitions=HashSet(),
txnStartTimestamp=1678620463834, txnLastUpdateTimestamp=1678620526121)
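
(As a side note, the same transaction metadata can also be inspected
programmatically with the Admin client; a rough sketch, assuming Kafka 3.0+ and
a placeholder bootstrap address, is below.)

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TransactionDescription;

public class DescribeConnectorTransaction {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; point this at the actual cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        String txnId = "connect-cluster-json-sftp-source-connector-0";
        try (Admin admin = Admin.create(props)) {
            TransactionDescription desc = admin
                    .describeTransactions(Collections.singleton(txnId))
                    .description(txnId)
                    .get();
            // Prints the same kind of state / producerId / producerEpoch details
            // as the __transaction_state dump above.
            System.out.printf("state=%s producerId=%d producerEpoch=%d partitions=%s%n",
                    desc.state(), desc.producerId(), desc.producerEpoch(),
                    desc.topicPartitions());
        }
    }
}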

Later, for the next transaction, when it returns the first batch, below are
the logs I can see.

 Transiting to abortable error state due to
org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
attempted to produce with an old epoch.
(org.apache.kafka.clients.producer.internals.TransactionManager)
[kafka-producer-network-thread |
connector-producer-json-sftp-source-connector-0]
2023-03-12 11:32:45,220 ERROR [json-sftp-source-connector|task-0]
ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} failed to send
record to streams-input:
(org.apache.kafka.connect.runtime.AbstractWorkerSourceTask)
[kafka-producer-network-thread |
connector-producer-json-sftp-source-connector-0]
org.apache.kafka.common.errors.InvalidProducerEpochException: Producer
attempted to produce with an old epoch.
2023-03-12 11:32:45,222 INFO [json-sftp-source-connector|task-0] [Producer
clientId=connector-producer-json-sftp-source-connector-0,
transactionalId=connect-cluster-json-sftp-source-connector-0] Transiting to
fatal error state due to
org.apache.kafka.common.errors.ProducerFencedException: There is a newer
producer with the same transactionalId which fences the current one.
(org.apache.kafka.clients.producer.internals.TransactionManager)
[kafka-producer-network-thread |
connector-producer-json-sftp-source-connector-0]
2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0] [Producer
clientId=connector-producer-json-sftp-source-connector-0,
transactionalId=connect-cluster-json-sftp-source-connector-0] Aborting
producer batches due to fatal error
(org.apache.kafka.clients.producer.internals.Sender)
[kafka-producer-network-thread |
connector-producer-json-sftp-source-connector-0]
org.apache.kafka.common.errors.ProducerFencedException: There is a newer
producer with the same transactionalId which fences the current one.
2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0]
ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} Failed to
flush offsets to storage:
(org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
[kafka-producer-network-thread |
connector-producer-json-sftp-source-connector-0]
org.apache.kafka.common.errors.ProducerFencedException: There is a newer
producer with the same transactionalId which fences the current one.
2023-03-12 11:32:45,224 ERROR [json-sftp-source-connector|task-0]
ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} failed to send
record to streams-input:
(org.apache.kafka.connect.runtime.AbstractWorkerSourceTask)
[kafka-producer-network-thread |
connector-producer-json-sftp-source-connector-0]
org.apache.kafka.common.errors.ProducerFencedException: There is a newer
producer with the same transactionalId which fences the current one.
2023-03-12 11:32:45,222 ERROR [json-sftp-source-connector|task-0|offsets]
ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} Failed to
commit producer transaction
(org.apache.kafka.connect.runtime.ExactlyOnceWorkerSourceTask)
[task-thread-json-sftp-source-connector-0]
org.apache.kafka.common.errors.ProducerFencedException: There is a newer
producer with the same transactionalId which fences the current one.
2023-03-12 11:32:45,225 ERROR [json-sftp-source-connector|task-0]
ExactlyOnceWorkerSourceTask{id=json-sftp-source-connector-0} Task threw an
uncaught and unrecoverable exception. Task is being killed and will not
recover until manually restarted
(org.apache.kafka.connect.runtime.WorkerTask)
[task-thread-json-sftp-source-connector-0]

Do you know why it shows an abort state even though I call commit?

I tested one more scenario, When I call the co

Re: [ANNOUNCE] New Kafka PMC Member: David Arthur

2023-03-13 Thread Bruno Cadonna

Congrats, David!

Bruno

On 10.03.23 01:36, Matthias J. Sax wrote:

Congrats!

On 3/9/23 2:59 PM, José Armando García Sancio wrote:

Congrats David!

On Thu, Mar 9, 2023 at 2:00 PM Kowshik Prakasam  
wrote:


Congrats David!

On Thu, Mar 9, 2023 at 12:09 PM Lucas Brutschy
 wrote:


Congratulations!

On Thu, Mar 9, 2023 at 8:37 PM Manikumar 
wrote:


Congrats David!


On Fri, Mar 10, 2023 at 12:24 AM Josep Prat 


wrote:


Congrats David!

———
Josep Prat

Aiven Deutschland GmbH

Alexanderufer 3-7, 10117 Berlin

Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen

m: +491715557497

w: aiven.io

e: josep.p...@aiven.io

On Thu, Mar 9, 2023, 19:22 Mickael Maison 

wrote:



Congratulations David!

On Thu, Mar 9, 2023 at 7:20 PM Chris Egerton 



wrote:


Congrats David!

On Thu, Mar 9, 2023 at 1:17 PM Bill Bejeck 

wrote:



Congratulations David!

On Thu, Mar 9, 2023 at 1:12 PM Jun Rao 


wrote:



Hi, Everyone,

David Arthur has been a Kafka committer since 2013. He has been

very

instrumental to the community since becoming a committer. It's

my

pleasure

to announce that David is now a member of Kafka PMC.

Congratulations David!

Jun
on behalf of Apache Kafka PMC