It was meaningless; it is just a copy of the first record.
In the real world, the key is a bigint and the primary key of a record.
I simplified the real program to make the code smaller.
If transactional messages have this problem, it also affects Kafka Streams,
which is built on transactional messages. There are
bq. record = new ProducerRecord<>("test", 0, (long)0, Long.toString(0));
What was the rationale for passing 0 as the third parameter in the second
transaction?
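(For reference, and assuming the standard Kafka Java client, the four-argument ProducerRecord constructor is (topic, partition, key, value), so the 0 in question is the explicit partition and (long)0 is the record key. A minimal sketch, with the class wrapper added for illustration:)

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordExample {
    public static void main(String[] args) {
        // Four-argument constructor: ProducerRecord(topic, partition, key, value)
        ProducerRecord<Long, String> record = new ProducerRecord<>(
                "test",            // topic
                0,                 // partition, pinned explicitly to partition 0
                (long) 0,          // key (a Long here; a bigint primary key in the real program)
                Long.toString(0)); // value
        System.out.println(record);
    }
}
```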
Cheers
On Wed, Dec 20, 2017 at 5:08 AM, HKT wrote:
> Hi, I have run the program for over 10 hours! It doesn't stop.
> It ge
Hi, I have run the program for over 10 hours! It doesn't stop.
It generated an 800+ MB log, but almost all of it is
2017-12-20 20:04:28 [kafka-producer-network-thread | producer-1] DEBUG
o.a.k.c.producer.internals.Sender - [Producer clientId=producer-1,
transactionalId=hello] Sending transactional reque
For the server log, is it possible to enable DEBUG logging?
Thanks
On Mon, Dec 18, 2017 at 4:35 PM, HKT wrote:
> Thanks for the reply.
>
> here is the client side log:
>
> 2017-12-19 08:26:08 [main] DEBUG o.a.k.c.p.i.TransactionManager -
> [Producer clientId=producer-1, transactionalId=hello] Tran
Can you capture a stack trace on the broker and pastebin it?
The broker log may also provide some clues.
Thanks
On Mon, Dec 18, 2017 at 4:46 AM, HKT wrote:
> Hello,
>
> I was testing transactional messages on Kafka,
> but I ran into a problem:
> the producer always blocks at the second commitTransaction
Hello,
I was testing transactional messages on Kafka,
but I ran into a problem:
the producer always blocks at the second commitTransaction.
Here is my code:
Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
kafkaProps.set
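(The code above is truncated. For readers following along, a minimal sketch of what a two-transaction test program of this shape typically looks like — the "hello" transactional id and the record shown earlier come from this thread, while the serializer settings and remaining structure are an assumption; it also requires a running broker at localhost:9092:)

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TxTest {
    public static void main(String[] args) {
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
        kafkaProps.setProperty("transactional.id", "hello"); // matches the logs above
        kafkaProps.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.LongSerializer");
        kafkaProps.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<Long, String> producer = new KafkaProducer<>(kafkaProps);
        producer.initTransactions();

        // First transaction: reportedly commits fine.
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("test", 0, (long) 0, Long.toString(0)));
        producer.commitTransaction();

        // Second transaction: this is the commit where the producer
        // reportedly blocks.
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("test", 0, (long) 0, Long.toString(0)));
        producer.commitTransaction();

        producer.close();
    }
}
```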