Team,
I am getting the below error when trying to use kafka-run-class.sh
kafka.tools.GetOffsetShell in a SASL-enabled cluster. I set KAFKA_OPTS with the
path to the JAAS file. Could you please help me understand the reason for this?
Kafka Version : 2.3.0
[2020-01-31 00:55:12,934] WARN [Consumer clientId=GetOffsetShe
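For reference, kafka-run-class.sh passes KAFKA_OPTS straight to the JVM, so the JAAS file is usually wired up as below. This is an untested sketch; the paths, hostnames, and topic name are placeholders:

```shell
# Placeholder path: point the JVM at the client JAAS file.
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"

# Placeholder broker/topic. Note: to my knowledge, GetOffsetShell in 2.3.0
# exposes no option to override the consumer's security.protocol, so the
# consumer it creates may still try PLAINTEXT against a SASL listener even
# with the JAAS file set -- which would explain warnings like the one above.
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list broker1:9093 --topic my-topic --time -1
```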
Hey Upendra,
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools
The above should guide you through the reassignment of partitions/replicas.
Also, you should read about
offsets.topic.num.partitions
offsets.topic.replication.factor
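For the under-replicated __consumer_offsets case specifically, the reassignment tool from the page above can raise the replication factor. A rough, untested sketch — broker ids 1, 2, 3 are placeholders, and a real file needs one entry per partition of the topic:

```shell
# Write a reassignment file assigning three replicas to partition 0
# (placeholder broker ids; repeat the entry for every partition).
cat > increase-rf.json <<'EOF'
{
  "version": 1,
  "partitions": [
    {"topic": "__consumer_offsets", "partition": 0, "replicas": [1, 2, 3]}
  ]
}
EOF

# Then hand the file to the tool (0.10.x tooling talks to ZooKeeper):
# bin/kafka-reassign-partitions.sh --zookeeper zk:2181 \
#   --reassignment-json-file increase-rf.json --execute
```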
I hope this helps you.
Regards,
On Thu, 30 Ja
Hi Brandon,
Which version of Kafka are the consumers running? My understanding is that if
they're running a version lower than the brokers', they could be using a
different message format, which means the brokers have to convert each
record before sending it to the consumer.
Thanks,
Jam
>> I really don't know what TOPOLOGY_OPTIMIZATION is for.
If you enable optimization, Kafka Streams tries to generate a more
efficient Topology when translating the DSL program. We are working on
some more documentation of this feature atm:
https://cwiki.apache.org/confluence/display/KAFKA/DSL+Opt
The only Streams-specific props I am using are:
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
Yes, there are three sub-topologies and the key does change betwee
Your program does not seem to contain a loop. Hence, it's unclear to me
atm what the issue could be.
Does the application commit offsets on the input topic? That would be an
indicator that it actually does make progress.
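One way to check that from the outside (an untested sketch; broker address and group name are placeholders — for a Streams app the group id equals the application.id):

```shell
# Placeholder broker and group id: describe the app's consumer group.
# CURRENT-OFFSET advancing across invocations means offsets are being
# committed, i.e. the app is making progress on its input topics.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-streams-app-id
```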
Do you change the key between joins, i.e., are there multiple
sub-topologies tha
Hi,
In one case I have found that when I define my topology using the Streams
DSL it tends to go into an infinite loop.
This usually happens if I start my stream, shut it down, and restart it
again after a while (by which time the source topic has moved ahead).
Stream processing seems to be stuck in a loop an
What you say is correct. This is a severe bug rendering standby tasks
useless for in-memory state stores. Can you please open a ticket so we
can fix it asap?
-Matthias
On 1/27/20 6:05 AM, Igor Danis wrote:
> Hi all,
>
> I have question about kafka-streams, particularly in-memory state-store
> (/
Hi,
We had a small cluster (4 brokers) dealing with very low throughput - a couple
hundred messages per minute at the very most. In that cluster we had a little
under 3300 total consumers (all were kafka streams instances). All broker CPUs
were maxed out almost consistently for a few weeks.
We
Hello all,
As said in the title, after re-installing Kafka on a Cloudera cluster we
have a problem with consuming data on a topic; we can still produce to the
topic but nothing is displayed on the consumers.
We think the problem is coming from leader election because we have many
errors in the logs of the brokers who
Also, I want to clarify one more doubt:
is there any way for the client to explicitly trigger a rebalance without
dying itself?
On Thu, Jan 30, 2020 at 7:54 PM Devaki, Srinivas
wrote:
> Hi All,
>
> We have a set of logstash consumer groups running under the same set of
> instances, we have decide
Hi All,
We have a set of logstash consumer groups running under the same set of
instances; we have decided to run separate consumer groups subscribing to
multiple topics instead of running a single consumer group for all topics (the
reasoning behind this decision is because of how our elasticsearch clus
Hi Team,
Is there a way to change the ISR for existing topics?
I want this for user topics as well as for the __consumer_offsets topic.
By mistake, the __consumer_offsets topic was configured with a replication
factor of 1 and 1 ISR.
Kafka broker and client version: 0.10.0.1
Thanks,
Upendra