Nodes with a negative id refer to the bootstrap servers you configured the
client with.
There are also metrics reported for nodes with an extremely large node
id; these are usually Integer.MAX_VALUE - (coordinator node id).
On Tue, Dec 3, 2019 at 1:37 PM Rajkumar Natarajan
wrote:
> I have
Hi ,
Is it possible to auto-scale Kafka? If it is not directly supported, is
there an automated way of adding brokers and performing other tasks like
partition rebalancing?
Regards,
Akash D Goel
Digital & Business Integration Manager
Testing on staging shows that a restart on exception is much faster and the
stream starts right away, which I think means we're reading way less data
than before!
What I was referring to is that, in Streams, the keys for window
> aggregation state are actually composed of both the window itself and
Yeah, we tried for this a while back (KIP-388 -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-388%3A+Add+observer+interface+to+record+request+and+response).
It's implemented in our Kafka repo (linked above).
On Tue, Dec 3, 2019 at 8:59 PM Ignacio Solis wrote:
>
> At LinkedIn we run a sty
I have a Kafka producer running the same code on 3 different Linux servers,
sending messages to the same Kafka cluster topic. Below is the sample code -
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
Map<MetricName, ? extends Metric> metrics = producer.metrics();
System.out.println(metrics.keySet());
The sample of
Oh, yeah, I remember that conversation!
Yes, then, I agree: if you're only storing the state of the most recent
window for each key, and the key you use for that state is actually the key
of the records, then an aggressive compaction policy plus your custom
transformer seems like a good way forward.
At LinkedIn we run a style of "read-only" interceptor we call an observer.
We use this for usage monitoring.
https://github.com/linkedin/kafka/commit/a378c8980af16e3c6d3f6550868ac0fd5a58682e
There is always a tension between exposing internals, creating stable
interfaces and performance. It's und
Hi John,
AFAIK the grace period uses stream time
https://kafka.apache.org/21/javadoc/org/apache/kafka/streams/kstream/Windows.html
which is per partition. Unfortunately we process data that's not in sync
between keys, so each key needs to be independent, and one key can have
much older data than another.
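For reference, the grace-period API being discussed (available since Kafka 2.1) is configured per window, not per key. A minimal topology sketch, with a hypothetical input topic name:

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.TimeWindows;

public class GraceExample {
    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("events") // hypothetical topic name
               .groupByKey()
               // grace() is measured against stream time, which advances per
               // partition, not per key; a key lagging far behind the rest of
               // its partition can still have its late records dropped
               .windowedBy(TimeWindows.of(Duration.ofMinutes(5))
                                      .grace(Duration.ofMinutes(30)))
               .count();
        return builder.build();
    }

    public static void main(String[] args) {
        System.out.println(build().describe());
    }
}
```

Building the topology needs no running broker, so describe() is a cheap way to inspect the resulting windowed aggregation.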
Hey Alessandro,
That sounds like it would also work. I'm wondering if it would actually
change what you observe w.r.t. recovery behavior, though. Streams already sets the
retention time on the changelog to equal the retention time of the windows, for
windowed aggregations, so you shouldn't be l
Thanks John for the explanation,
I thought that with EOS enabled (which we have) it would in the worst case
find a valid checkpoint and start the restore from there until it reached
the last committed status, not completely from scratch. What you say
definitely makes sense now.
Since we don't real
Hello, guys,
I have one topic with 40 partitions, and 40 consumers process messages from
this topic; the logic is the same, of course.
But I find that for one partition the consumer never catches up, even when
there are no more new messages. And it seems that this partition is fixed.
It is odd, and I do
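One way to confirm how far behind that partition is would be to compare its end offset with the group's committed offset. A sketch using the plain Java consumer, where the broker address, topic, partition number, and group id are all assumptions:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class PartitionLagCheck {
    // Lag = log end offset minus last committed offset (0 if nothing committed yet).
    static long lag(long endOffset, Long committedOffset) {
        return endOffset - (committedOffset == null ? 0L : committedOffset);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumption: local broker
        props.put("group.id", "my-consumer-group");        // assumption: the group in question
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 17); // assumption: the stuck partition
            Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singleton(tp));
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println("lag = " + lag(end.get(tp),
                    committed == null ? null : committed.offset()));
        }
    }
}
```

If the lag stays constant while end offsets grow, the consumer is stuck; if both stay constant, it may simply never have been assigned the partition.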
Thanks Eric for sharing this test script. It simplifies the process for me.
https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
I am in the process of setting up my local environment to start running the
tests on 2.4.0 RC2 as well and I noticed the following artifacts are being
v
Hello folks,
This is a kind reminder of the Bay Area Kafka® meetup this Thursday (Dec.
5th) 5:30pm, at Confluent's new Mountain View HQ office.
*RSVP and Register* (if you intend to attend in person):
https://www.meetup.com/KafkaBayArea/events/266327152/
*Date*
5:30pm, Thursday, December 5th, 2
Hi Alessandro,
To take a stab at your question, maybe it first doesn't find it, but then
restores some data, writes the checkpoint, and then later on, it has to
re-initialize the task for some reason, and that's why it does find a
checkpoint then?
More to the heart of the issue, if you have EO
Hi John,
thanks a lot for helping, regarding your message:
- no, we only have 1 instance of the stream application, and it always
re-uses the same state folder
- yes, we're seeing most issues when restarting non-gracefully due to an exception
I've enabled trace logging and filtering by a single state s
As mentioned above, the issues fixed in 2.2.2 are listed in the release notes:
https://www.apache.org/dist/kafka/2.2.2/RELEASE_NOTES.html
On Tue, Dec 3, 2019 at 7:20 AM Sachin Mittal wrote:
> Does this release have a fix for that critical Windows bug of not being
> able to delete topics?
> If not then under which rele
Hi,
Thank you all for reporting the issues.
We'll consider the below JIRAs as blockers for 2.4.0 release.
https://issues.apache.org/jira/browse/KAFKA-9231
https://issues.apache.org/jira/browse/KAFKA-9258
https://issues.apache.org/jira/browse/KAFKA-9156
I will create new release candidate after m
The main challenge is doing this without exposing a bunch of internal
classes. I haven't seen a proposal that handles that aspect well so far.
Ismael
On Tue, Dec 3, 2019 at 7:21 AM Sönke Liebau
wrote:
> Hi Thomas,
>
> I think that idea is worth looking at. As you say, if no interceptor is
> con
Hi Vasily,
Probably in this case, with the constraints you're providing, the first
branch would output first, but I wouldn't depend on it. Any small change in
your program could mess this up, and any change in Streams could also alter
the exact execution order.
The right way to think abo
Hello,
I wonder if the ordering of the messages is preserved by Kafka Streams when
the messages are processed by the same sub-topology without repartitioning
and, in the end, there are multiple sinks for the same topic.
I couldn't find the answer to this question in the docs/mailing list/stack
over
Hi Murilo,
For this case, you don’t have to worry. Kafka Streams provides the guarantee
you want by default.
Let us know if you want/need more information!
Cheers,
John
On Tue, Dec 3, 2019, at 08:59, Murilo Tavares wrote:
> Hi Mathias
> Thank you for your feedback.
> I'm still a bit confused
Hi Thomas,
I think that idea is worth looking at. As you say, if no interceptor is
configured then the performance overhead should be negligible. Basically it
is then up to the user to decide whether to take the performance hit.
We should make sure to think about monitoring capabilities like t
Hi Mathias
Thank you for your feedback.
I'm still a bit confused about which approach one should take. My
KafkaStreams application is pretty standard for KafkaStreams: it takes a
few table-like topics, groups and aggregates some of them so we can join
with others. Something like this:
KTable left =
Does this release have a fix for that critical Windows bug of not being
able to delete topics?
If not, under which release can we expect it?
Thanks and Regards
Sachin
On Mon, Dec 2, 2019 at 5:55 PM Karolis Pocius
wrote:
> 2.2.2 is a bugfix release, it contains some of the fixes from 2.3.0/1,
Hi Team,
I'm getting the below exception in one of the Kafka brokers while restarting,
but the server starts and serves properly.
Why is this error occurring?
Is there any severe issue due to this?
And how do I resolve it?
ERROR [Group Metadata Manager on Broker 3]: Error in loading offsets from
[__cons
Hi M. Manna,
Thank you for your feedback; any and all thoughts on this from the
community are appreciated.
I think it is important to distinguish that there are two parts to this.
One would be a server side interceptor framework and the other would be
the interceptor implementations themselves
That is correct. It depends on what guarantees you need, though. Also
note that producers often write into repartition topics to re-key data,
and for this case no ordering guarantee can be provided anyway, as the
single-writer principle is "violated".
Also note that Kafka Streams can handle out-