I have logic in my service to capture exceptions being thrown during
message processing and produce a new message to a different topic with
information about the error. The idea is to leave the message unmodified,
aka produce the exact same bytes to this new topic, therefore I'm planning
on adding
I don't understand why the kafka-transactions CLI requires a topic and
partition when a transaction could span not only multiple partitions
but multiple topics.
BTW, when I list the transactions with kafka-transactions list I'm getting
some with state EMPTY; this means the transaction was started by a
I did the following test that allowed me to introduce a duplicate message
in the output topic.
1. Client A starts the consumer and the producer and holds a reference to
the current groupMetadata, which has generation.id -1 since the consumer
hasn't joined the group yet
2. Client A joins the group
sense?
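The stale-metadata test above can be sketched roughly as follows (a sketch only, assuming the Kafka Java client; the topic and group names are made up):

```java
// Sketch of the duplicate-introducing test: groupMetadata() captured
// *before* the consumer joins the group reports generationId() == -1,
// which defeats KIP-447 fencing if it is later passed to
// sendOffsetsToTransaction. Names/configs are illustrative.
Properties cp = new Properties();
cp.put("bootstrap.servers", "localhost:9092");
cp.put("group.id", "client-a");
cp.put("isolation.level", "read_committed");
cp.put("key.deserializer", StringDeserializer.class.getName());
cp.put("value.deserializer", StringDeserializer.class.getName());
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);

// Step 1: metadata held from before joining the group.
ConsumerGroupMetadata stale = consumer.groupMetadata(); // generationId() == -1

// Step 2: joining the group bumps the generation, so the metadata must be
// re-fetched after assignment rather than cached from before.
consumer.subscribe(Collections.singletonList("input-topic"));
consumer.poll(Duration.ofSeconds(1));
ConsumerGroupMetadata fresh = consumer.groupMetadata();
```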
>
>
> -Matthias
>
> On 5/27/22 10:43 AM, Gabriel Giussi wrote:
> > The docs say
> > "This exception indicates that the broker received an unexpected sequence
> > number from the producer, which means that data may have been lost. If
> the
fenced.
>
> I think we may have overlooked emphasizing in the documentation that, in
> case 1), it should not expect ProducerFencedException. If so, we can fix
> the javadoc.
>
>
>
>
> On Tue, May 31, 2022 at 8:26 AM Gabriel Giussi
> wrote:
>
> > But there is no
t means the producer has
> been fenced and hence should be closed.
>
> So in that step 4, the old producer with Client A should be closed within
> the rebalance callback, and then one can create a new producer to pair with
> the re-joined consumer.
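A minimal sketch of that pairing (hypothetical names; `newProducer()` is an assumed helper that builds a configured transactional producer, and `consumer` is assumed to exist already):

```java
// Sketch: close the old producer when partitions are revoked and create a
// fresh one on reassignment, so a possibly-fenced producer is never reused.
AtomicReference<KafkaProducer<String, String>> producerRef =
        new AtomicReference<>(newProducer());

consumer.subscribe(Collections.singletonList("input-topic"),
        new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // The old producer may have been fenced; close it unconditionally
                // inside the rebalance callback.
                producerRef.getAndSet(null).close();
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Pair a brand-new transactional producer with the re-joined consumer.
                KafkaProducer<String, String> p = newProducer();
                p.initTransactions();
                producerRef.set(p);
            }
        });
```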
>
> On Tue, May 24, 2022 at 1:3
The docs say
"This exception indicates that the broker received an unexpected sequence
number from the producer, which means that data may have been lost. If the
producer is configured for idempotence only (i.e. if enable.idempotence is
set and no transactional.id is configured), it is possible to
This is the scenario I have in mind
1. Client A gets assigned partitions P1 and P2.
2. Client A polls a message with offset X from P1, opens a transaction and
produces to some output topic.
3. Client B joins the group and gets assigned P2
4. Client A tries to sendOffsets with group metadata but
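The consume-transform-produce cycle in those steps would look roughly like this (a KIP-447-style sketch; topic names are invented, and `consumer`/`producer` are assumed to be configured already):

```java
producer.initTransactions();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    if (records.isEmpty()) continue;
    producer.beginTransaction();
    try {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> rec : records) {
            producer.send(new ProducerRecord<>("output-topic", rec.key(), rec.value()));
            offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                        new OffsetAndMetadata(rec.offset() + 1));
        }
        // KIP-447: pass the *current* group metadata so a zombie holding a
        // stale generation (step 4 above) is fenced at commit time.
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    } catch (ProducerFencedException e) {
        // This producer has been fenced (e.g. Client B took over P2): close it.
        producer.close();
        throw e;
    } catch (KafkaException e) {
        producer.abortTransaction();
    }
}
```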
I'd like to remind
> you about. For example, if two producers could be (mistakenly) created with
> different txn.ids and are paired with the same consumer, then the new API
> in KIP-447 would not fence one of them.
>
> Guozhang
>
> On Tue, May 24, 2022 at 5:50 AM Gabriel Giussi
>
; not crucial, but if you have an N-1 producer-consumer mapping then you may
> still need to configure that id.
>
>
> Guozhang
>
>
>
> On Fri, May 20, 2022 at 8:39 AM Gabriel Giussi
> wrote:
>
> > Before KIP-447 I understood the use of transactional.id to prevent
Before KIP-447 I understood the use of transactional.id to prevent us from
zombies introducing duplicates, as explained in this talk
https://youtu.be/j0l_zUhQaTc?t=822.
So in order to get zombie fencing working correctly we should assign
producers with a transactional.id that included the
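The pre-KIP-447 pattern this refers to can be sketched like so (an illustration only, assuming one producer per input partition; the id scheme and names are made up):

```java
// Pre-KIP-447 sketch: derive transactional.id from the consumed
// topic-partition, so that a restarted instance taking over the same
// partition fences its zombie predecessor via initTransactions().
String inputTopic = "input-topic"; // illustrative
int partition = 3;                 // illustrative
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("enable.idempotence", "true");
props.put("key.serializer", StringSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
props.put("transactional.id", "my-app-" + inputTopic + "-" + partition);
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
// Fences any older producer registered with the same transactional.id.
producer.initTransactions();
```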
nutes, you will see
rebalancing.
2018-04-05 19:01 GMT-03:00 Scott Thibault <scott.thiba...@multiscalehn.com>:
> No, there is only one consumer in the group.
>
>
> On Thu, Apr 5, 2018 at 2:39 PM, Gabriel Giussi <gabrielgiu...@gmail.com>
> wrote:
>
> > The
Is there some other consumer (in the same process or another) using the
same group.id?
2018-04-05 14:36 GMT-03:00 Scott Thibault :
> I'm using the Kafka 1.0.1 Java client with 1 consumer and 1 partition and
> using the ConsumerRebalanceListener I can see that the
Kafka brokers version: 0.11.0.0
Kafka client version: 0.11.0.2
If we have two KafkaConsumers using the same group.id (running in the same
process or in two different processes) and one of them is closed, it
triggers a rebalance in the other KafkaConsumer even if they were
subscribed to different
lients/
> consumer/MockConsumer.java#L164
>
>
>
> On Mon, Feb 19, 2018 at 4:39 AM, Gabriel Giussi <gabrielgiu...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I'm trying to use MockConsumer to test my application code but I've
> faced a
> > couple of lim
Hi,
I'm trying to use MockConsumer to test my application code but I've faced a
couple of limitations and I want to know if there are workarounds or
something that I'm overlooking.
Note: I'm using kafka-clients v 0.11.0.2
1. Why the addRecord
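For reference, a minimal MockConsumer setup that makes addRecord work (a sketch against the 0.11-era API; the topic name is made up):

```java
// MockConsumer sketch: addRecord only accepts records for partitions the
// mock is assigned to, and poll needs a known position, hence the
// assign + updateBeginningOffsets calls before adding records.
MockConsumer<String, String> consumer =
        new MockConsumer<>(OffsetResetStrategy.EARLIEST);
TopicPartition tp = new TopicPartition("test-topic", 0);

consumer.assign(Collections.singletonList(tp));
consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));

consumer.addRecord(new ConsumerRecord<>("test-topic", 0, 0L, "key", "value"));
ConsumerRecords<String, String> records = consumer.poll(100); // 0.11 API: poll(long)
```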