I figured it out :).
After playing with various sync/async variants of the above example, it's
quite simple: I needed to get off the callback thread in
`Producer#send(topic, callback)` before making the second send call. To do
this, I changed `thenCompose()` to `thenComposeAsync()`. I've posted a new
message with all of this distilled into a simple test and log output.
On Sat, Apr 29, 2017 at 6:53 PM, Dmitry Minkovsky
wrote:
> It appears—at least according to debug logs—that the metadata request is
> sent after the metadata update times out:
I am attempting to send messages to two topics with a newly created
producer.
The first message sends fine, but for some reason, the producer does not
fetch metadata for the second topic before attempting to send. So sending
to the second topic fails. The producer fetches metadata for the second
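
(For reference, a minimal sketch of the thenComposeAsync fix described in
the reply above. The CompletableFuture adapter, topic names, and config here
are my own illustration, not code from the thread.)

    import java.util.Properties;
    import java.util.concurrent.CompletableFuture;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TwoTopicSend {

        // Adapt the callback-based Producer#send to a CompletableFuture.
        static CompletableFuture<RecordMetadata> sendAsync(
                KafkaProducer<String, String> producer,
                ProducerRecord<String, String> record) {
            CompletableFuture<RecordMetadata> f = new CompletableFuture<>();
            producer.send(record, (md, ex) -> {
                if (ex != null) f.completeExceptionally(ex);
                else f.complete(md);
            });
            return f;
        }

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);

            // The send callback runs on the producer's I/O thread. With
            // thenCompose() the second send would also run on that thread,
            // where it can block waiting for topic-b metadata that this same
            // thread is supposed to fetch. thenComposeAsync() hands the
            // second send off to another thread instead.
            sendAsync(producer, new ProducerRecord<>("topic-a", "k", "v1"))
                .thenComposeAsync(md ->
                    sendAsync(producer, new ProducerRecord<>("topic-b", "k", "v2")))
                .get();

            producer.close();
        }
    }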
Running inside a Docker container introduced a slight wrinkle. It’s easily
resolvable. This line from the controller log file was the tip-off:
java.io.IOException: Connection to 64174aa85d04:9092 (id: 2 rack: null) failed
64174aa85d04 was the container ID of one of the other brokers. All the
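
(The resolution is cut off above. The usual fix, given here as my own
illustration rather than the original poster's exact change: have each
broker advertise a hostname that other brokers and clients can actually
resolve, instead of its container ID. The hostname below is made up.)

    # server.properties, per broker
    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://kafka1.example.com:9092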
Ah. Sorry.
You are right. Nevertheless, you can set a non-null dummy value like
`new byte[0]` instead of the actual "tuple" so as not to blow up your
storage requirement.
-Matthias
On 4/30/17 10:24 AM, Michal Borowiecki wrote:
> Apologies, I must not have made myself clear.
>
> I meant the values in
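
(A minimal sketch of the dummy-value idea, written against a recent Streams
API; the topic name and serdes are assumptions, not from the thread.)

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KTable;

    StreamsBuilder builder = new StreamsBuilder();

    // Keep only the keys: map every value to an empty byte array so the
    // state store stays small. The value must not be null, because a null
    // value in a changelog topic is a tombstone (delete).
    KTable<String, byte[]> ids = builder
            .table("ids-topic", Consumed.with(Serdes.String(), Serdes.String()))
            .mapValues(v -> new byte[0]);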
Yes, you understand correctly that batch == request.
-hans
> On Apr 30, 2017, at 11:58 AM, Petr Novak wrote:
>
> Thank you a lot.
>
> How do requests in max.in.flight.requests.per.connection relate to batches?
> Does 1 request mean precisely 1 batch? It would make sense if I
Thank you a lot.
How do requests in max.in.flight.requests.per.connection relate to batches?
Does 1 request mean precisely 1 batch? It would make sense if I think about
it. Just to ensure I understand correctly.
Petr
From: Michal Borowiecki [mailto:michal.borowie...@openbet.com]
Sent: 30.
Yes, that's what the docs say in both places:
max.in.flight.requests.per.connection: The maximum number of
unacknowledged requests the client will send on a single connection
before blocking. Note that if this setting is set to be greater than 1
and there are failed sends, there is a risk of
Does this mean that if the client has retries > 0 and
max.in.flight.requests.per.connection > 1, then even if the topic only has one
partition, there's still no guarantee of ordering?
Thanks,
Jun
> On Apr 30, 2017, at 7:57 AM, Hans Jespersen wrote:
>
> There is a
Apologies, I must not have made myself clear.
I meant the values in the records coming from the input topic (which in
turn are coming from Kafka Connect in the example at hand)
and not the records coming out of the join.
My intention was to warn against sending null values from Kafka Connect
Your observation is correct.
If you use an inner KStream-KTable join, the join will implement the
filter automatically, as the join returns no result for non-matching keys.
-Matthias
On 4/30/17 7:23 AM, Michal Borowiecki wrote:
> I have something working on the same principle (except not using
> connect),
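
(Sketching the shape Matthias describes, with a keys-only ids table like the
one suggested elsewhere in this thread; topics, serdes, and names are my own
illustration.)

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Produced;

    StreamsBuilder builder = new StreamsBuilder();

    KTable<String, byte[]> ids = builder
            .table("ids-topic", Consumed.with(Serdes.String(), Serdes.ByteArray()));

    KStream<String, String> events = builder
            .stream("events-topic", Consumed.with(Serdes.String(), Serdes.String()));

    // The inner join is the filter: an event whose key is absent from the
    // ids table yields no join result and is simply dropped.
    events.join(ids, (event, dummy) -> event)
          .to("filtered-events", Produced.with(Serdes.String(), Serdes.String()));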
Hi Everyone,
I have a problem and I hope one of you can help me figure it out.
One of our kafka-streams processes stopped processing messages.
When I turn on debug logging I see lots of these messages:
2017-04-30 15:42:20,228 [StreamThread-1] DEBUG o.a.k.c.c.i.Fetcher: Sending
fetch for partitions
Ah, yes, you're right. I misread it.
My bad. Apologies.
Michal
On 30/04/17 16:02, Svante Karlsson wrote:
@michal
My interpretation is that he's running 2 instances of zookeeper - not
6. (1 on the "4 broker machine" and one on the other)
I'm not sure where that leaves you in zookeeper
@michal
My interpretation is that he's running 2 instances of zookeeper - not 6. (1
on the "4 broker machine" and one on the other)
I'm not sure where that leaves you in zookeeper land - i.e. if you happen to
have a timeout between the two zookeepers will you be out of service or
will you have a
There is a parameter that controls this behavior called
max.in.flight.requests.per.connection.
If you set max.in.flight.requests.per.connection = 1 then the producer waits
until the previous produce request returns a response before sending the next
one (or retrying). The retries parameter
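
(To make that concrete, a producer configured for strict ordering within a
partition; the broker address and retry count below are illustrative.)

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // Retry failed sends...
    props.put(ProducerConfig.RETRIES_CONFIG, 3);
    // ...but allow only one in-flight request (batch) per connection, so a
    // retried batch can never overtake a batch sent after it.
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);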
Hi Chuck,
Are you running zookeepers in the same containers as Kafka brokers?
Kafka brokers should be able to communicate with any of the zookeepers
and, more importantly, zookeepers need to be able to talk to each other.
Therefore, the zookeeper port should be exposed too (2181 by default),
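
(Not from the thread, but for completeness: ensemble members also talk over
the quorum and election ports defined in zoo.cfg, so in Docker those must be
reachable between the zookeeper containers as well. Hostnames below are
illustrative.)

    # zoo.cfg (same on every node); peers connect on 2888 (quorum) and
    # 3888 (leader election), clients on 2181
    clientPort=2181
    server.1=zk1:2888:3888
    server.2=zk2:2888:3888
    server.3=zk3:2888:3888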
I have something working on the same principle (except not using
connect), that is, I put ids to filter on into a ktable and then (inner)
join a kstream with that ktable.
I don't believe the value can be null though. In a changelog a null value
is interpreted as a delete so won't be put into a
Hi Jan,
Correct. As I said before, it's not common or recommended practice to run
an even number, and I wouldn't recommend it myself. I hope it didn't
sound as if I did.
However, I don't see how this would cause the issue at hand unless at
least 3 out of the 6 zookeepers died, but that could
http://kafka.apache.org/documentation.html#topic-config
Check this.
You can use *--alter* to override/add the default config.
retention.ms can be used to set topic-level config.
For internal topics I suppose you need to provide a topic config map before
creating internal topics.
Example:
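
(The example itself is cut off above. As an illustration of the same idea,
not necessarily the original snippet: overriding retention.ms on an existing
topic with the kafka-configs tool, and passing it to the internal topics a
Streams application creates via a "topic."-prefixed property. The topic
name, application id, and one-day value are made up.)

    kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-topic \
      --add-config retention.ms=86400000

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    // In recent Kafka Streams versions, configs prefixed via
    // StreamsConfig.topicPrefix() are applied to the internal
    // (changelog/repartition) topics the application creates.
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.topicPrefix("retention.ms"), "86400000");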
I looked this up yesterday when I read the grandparent, as my old
company ran two and I needed to know.
Your link is a bit ambiguous but it has a link to the zookeeper
Getting Started guide which says this:
"
For replicated mode, a minimum of three servers are required, and it
is strongly
Hi
Where can I find the Kafka Streams internal topic data retention time, and
how do I change it?
Thanks,
Shimi
Hello,
Does the Kafka producer wait until the previous batch returns a response
before sending the next one? Do I assume correctly that it does not, given
that retries can change ordering?
Hence is batch delay introduced only by the producer's internal send-loop
time and linger?
If a timeout would be localized
Svante, I don't share your opinion.
Having an even number of zookeepers is not a problem in itself; it simply
means you don't get any better resilience than if you had one fewer
instance (e.g. 6 servers need a quorum of 4 and so tolerate 2 failures,
the same as 5 servers).
Yes, it's not common or recommended practice, but you are allowed to
have an even number of zookeepers and