Sorry for not being clear. Kafka ships with a tool that allows you to get
a dump of topic content in human-readable form (i.e., what is stored on
disk in a topic).
You can call it via `bin/kafka-run-class.sh kafka.tools.DumpLogSegments`
and need to point it to the corresponding segment files. For example
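A minimal invocation might look like the following (the segment path is a placeholder; flags may differ slightly between versions, so check the tool's `--help` output):

```shell
# Dump one segment file of partition 0 of "my-topic" in human-readable form.
# --print-data-log also prints the message payloads;
# --deep-iteration steps into compressed message sets.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --deep-iteration \
  --print-data-log \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log
```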
Matthias,
I will try to get you the broker logs. As far as the partitions of
__consumer_offsets topic, I am not quite clear on what you want me to
provide. Are you wanting the output of
./kafka-topics.sh --zookeeper zk-host --describe --topic __consumer_offsets
-russ
On Fri, Feb 16, 2018 at 1:4
Ok. Let us know if bouncing works.
Sounds like a bug though, that the consumer group is not cleared. It
would be helpful to get the broker logs (of the node that hosts the
coordinator). Can you also provide a dump of the corresponding
partitions of the __consumer_offsets topic?
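One way to inspect those partitions is with the console consumer and the offsets formatter. This is a sketch: the formatter class name below matches the 0.10.x line (newer brokers moved it to `kafka.coordinator.group.GroupMetadataManager`), and the broker address is a placeholder:

```shell
# Read the internal __consumer_offsets topic and decode each record.
# exclude.internal.topics=false is required so the consumer will
# subscribe to an internal topic at all.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic __consumer_offsets \
  --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter" \
  --consumer-property exclude.internal.topics=false \
  --from-beginning
```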
Thanks a lot.
-Matthias
Matthias,
The session.timeout.ms is set to 1. It has been in this "weird" state
now for going on 24 hours.
We are going to try to bounce the group coordinator.
Thanks,
-russ
On Fri, Feb 16, 2018 at 12:18 PM, Matthias J. Sax
wrote:
> Hi,
>
> this "weird" state can happen. Usually, a consume
Hi,
this "weird" state can happen. Usually, a consumer sends a "leave group
request" to inform the broker it's shutting down. However, this is a
send-and-forget approach and if the message is lost, the broker might
think that a stopped consumer is still alive. For this reason, the
broker should ev
You should upgrade to a newer version. 0.10.2 has some issues on broker
restart that got fixed in later versions.
Note: you don't need to upgrade your brokers for this. Since 0.10.1,
Kafka Streams is backward compatible to older brokers:
https://docs.confluent.io/current/streams/upgrade-guide.html#c
Can you check the committed offsets using bin/kafka-consumer-groups.sh?
Also inspect your consumer's position via KafkaConsumer#position() to
see where the consumer actually is in the topic.
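For example (group name and broker address are placeholders; on older 0.10.x tooling you may additionally need the `--new-consumer` flag):

```shell
# Show committed offset, log-end offset, and lag per partition
# for one consumer group.
bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group my-consumer-group
```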
-Matthias
On 2/16/18 5:13 AM, Debraj Manna wrote:
> I have posted the same question in stackoverflow al
Hi There,
What is the process to recover from a scenario where the Kafka service is
down because the disk holding one of the log dirs is full? We could delete
some topic partitions in that directory, since many of the topics are not
required. Any input is much appreciated.
Thanks,
Biju
Hi,
I hope you treat this with appropriate urgency. A while back, I raised a
pull request which is simply a patch for Windows OS. The pull request can
be found here:
https://github.com/apache/kafka/pull/3838/
Even though test-failure markers are there, those aren't relevant to my
commit. And as s
I have posted the same question on Stack Overflow as well, but have not
received any reply there either:
https://stackoverflow.com/questions/48826279/kafka-0-10-java-consumer-not-reading-message-from-topic
On Fri, Feb 16, 2018 at 5:23 PM, Debraj Manna
wrote:
> I have a simple java producer like below
>
>
More like Amazon supporting multiple devices from multiple distribution
sources than a mom-and-pop shop?
Get a CA (certificate authority) provider. CA providers protect websites
for a living; here is a list:
https://www.pluralsight.com/blog/software-development/top-reliable-ssl-certificates
I have a simple Java producer like below:

public class Producer {
    private final static String TOPIC = "my-example-topi8";
    private final static String BOOTSTRAP_SERVERS = "localhost:8092";

    public static void main(String[] args) throws Exception {
        Producer producer = createProducer
Ran tests from source and quickstart with binaries
+1 (non-binding)
On Fri, Feb 16, 2018 at 6:05 AM, Jason Gustafson wrote:
> +1. Verified artifacts and ran quickstart. Thanks Ewen!
>
> -Jason
>
> On Thu, Feb 15, 2018 at 1:42 AM, Rajini Sivaram
> wrote:
>
>> +1
>>
>> Ran quickstart with binarie
Hey Guys,
I am wondering if there is a way to get the message throughput of Kafka
brokers.
Ex:
1) Number of messages sent per day to a Broker/Cluster
2) Number of messages consumed per day by a Broker/Cluster
It should be cumulative across all topics on a cluster. Also, the count
shouldn't consider repli
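One way to get at per-broker message counts is the standard BrokerTopicMetrics JMX meter. A sketch, assuming JMX is enabled on the broker at port 9999 (the port and host are assumptions):

```shell
# Poll the broker-wide messages-in meter over JMX.
# The Count attribute is cumulative since broker start, so a per-day
# figure is the difference between two samples taken 24h apart.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
```

Note that brokers expose no corresponding "messages consumed" meter; consumption is usually estimated from BytesOutPerSec or derived from consumer-group offset movement.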
We are using kafka version 0.10.2.
-------- Original message --------
From: "Matthias J. Sax"
Date: 2/14/18 4:00 PM (GMT-05:00)
To: users@kafka.apache.org
Subject: Re: Store not ready
What version do you use?
Kafka Streams should be able to keep running while you restart your
brokers. If not,