I added the config “auto.offset.reset” = “earliest” to the consumer and now it receives the messages. So do I need to clean up any persistent data after I call KafkaServer.shutdown()? How do I do that?
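For the cleanup question above: an embedded broker leaves its log segments and topic metadata on disk under whatever `log.dirs` points to, so after `KafkaServer.shutdown()` you can simply delete that directory tree. A minimal sketch using only the JDK (the path is hypothetical; use whatever directory you configured as `log.dirs`):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class BrokerCleanup {
    /** Recursively delete a directory tree, e.g. the broker's log.dirs. */
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) return;
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()) // children before parents
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical path; substitute your embedded broker's log.dirs.
        Path logDirs = Paths.get(args.length > 0 ? args[0] : "/tmp/embedded-kafka-logs");
        // kafkaServer.shutdown(); kafkaServer.awaitShutdown(); // stop the broker first
        deleteRecursively(logDirs);
        System.out.println("deleted=" + !Files.exists(logDirs));
    }
}
```

If your test harness also embeds ZooKeeper, its data directory should get the same treatment, since topic and consumer-group metadata live there in this Kafka version.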
On 31/08/2018, 11:30, "Cristian Petroaca" <cpetro...@fitbit.com.INVALID> wrote:

Some more weird things:

1. I re-added auto.create.topics.enable=true but this time also set default.replication.factor=1, and now I did not get the error anymore and the consumer is polling. But when I then removed “default.replication.factor”, the consumer still worked without the group coordinator error.

2. The producer sends 10 messages. After I solved the coordinator issue and I see the consumer poll(), I also see these messages right at the beginning:

DEBUG [kafka-consumer-0] (LogContext.java:183) - [Consumer clientId=dev.consumer_0, groupId=dev.consumer] Fetching committed offsets for partitions: [xxxx-0]
DEBUG [kafka-consumer-0] (LogContext.java:183) - [Consumer clientId=dev.consumer_0, groupId=dev.consumer] Found no committed offset for partition xxxx-0
DEBUG [kafka-consumer-0] (LogContext.java:189) - [Consumer clientId=dev.consumer_0, groupId=dev.consumer] Committed offset 10 for partition dev.site.api_analytics-0

So it seems that the broker properties are persisted beyond runtime, and when I programmatically recreate a KafkaServer it somehow picks up the properties from before. Does that make sense?

As for the consumer, it’s not clear why it commits offset 10 right at the beginning even though it did not receive any messages from poll(). Perhaps that’s why it does not consume messages.

On 31/08/2018, 10:12, "Cristian Petroaca" <cpetro...@fitbit.com.INVALID> wrote:

Poll timeout is 1000 ms.

On 30/08/2018, 18:28, "M. Manna" <manme...@gmail.com> wrote:

What is your poll time for the consumer's poll() method?

On Thu, 30 Aug 2018, 16:23 Cristian Petroaca, <cpetro...@fitbit.com.invalid> wrote:

> Ok, so I mixed things up a little.
> I started with the Kafka server configured to auto-create topics.
> That gave the error.
> But turning auto-create off and creating the topic with AdminUtils
> does not show the error, and the consumer actually polls for messages.
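One plausible explanation for state apparently "persisting beyond runtime" is not the properties themselves but the on-disk data: if `log.dirs` is left at its default (`/tmp/kafka-logs`), every new KafkaServer instance reopens the previous run's topics and committed offsets, which would also explain the "Committed offset 10" line above. A minimal sketch, assuming you build the broker Properties yourself as in this thread, that gives every run its own throwaway log directory (the property names are real Kafka broker configs; the helper itself is illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class FreshBrokerProps {
    /** Build embedded-broker Properties with a unique, empty log.dirs per run. */
    static Properties freshBrokerProps(int zooKeeperPort) throws IOException {
        Path logDir = Files.createTempDirectory("kafka-logs-"); // unique per run
        Properties props = new Properties();
        props.put("host.name", "127.0.0.1");
        props.put("port", "0");
        props.put("zookeeper.connect", "127.0.0.1:" + zooKeeperPort);
        props.put("broker.id", "1");
        props.put("log.dirs", logDir.toString()); // fresh state every run
        props.put("auto.create.topics.enable", "true");
        props.put("delete.topic.enable", "true");
        // On a single-broker setup the __consumer_offsets topic cannot reach its
        // default replication factor of 3, which is a classic cause of
        // "The coordinator is not available" - hence pinning it to 1 here.
        props.put("offsets.topic.replication.factor", "1");
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties p = freshBrokerProps(2181);
        System.out.println(p.getProperty("log.dirs"));
        // new KafkaServer(new KafkaConfig(p), ...).startup(); // as in the thread
    }
}
```

The same reasoning applies to the embedded ZooKeeper's data directory, since consumer-group and topic metadata are stored there in 1.0.x.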
> I did not modify the “default.replication.factor” for auto-created topics,
> and that defaults to 1. So I’m not sure why I would see the error in the
> first place?
>
> Even though I don’t see the error anymore and my consumer polls for
> messages, it does not receive any. I am waiting a reasonable amount of
> time (1 min) after the producer created the messages.
> An independent console consumer connected to the same broker and topic
> does receive them.
> My consumer config does not seem exotic enough to create such a situation.
> Any reason for not receiving messages?
>
> Thanks
>
> On 30/08/2018, 11:25, "Cristian Petroaca" <cpetro...@fitbit.com.INVALID> wrote:
>
> Yes.
> In my programmatic env I first create it with:
> AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties(), RackAwareMode.Enforced$.MODULE$);
> So partitions = 1 and replication = 1.
>
> The same for the remote broker: I created the topic with --partitions 1 --replication-factor 1.
>
> Are there any other reasons for that error?
>
> On 29/08/2018, 18:06, "M. Manna" <manme...@gmail.com> wrote:
>
> Does the topic exist in both your programmatic broker and remote broker?
>
> Also, are the topic settings the same for partitions and replication factor?
> GROUP_COORDINATOR_NOT_AVAILABLE is enforced as of 0.11.x if the
> auto-created topic partition/replication-factor setup doesn't match
> the server's config. So you might want to check all of these.
>
> Regards,
>
> On Wed, 29 Aug 2018 at 15:55, Cristian Petroaca <cpetro...@fitbit.com.invalid> wrote:
>
> > Tried it, same problem with 9092.
> > By the way, the same consumer works with a remote 1.0.1 Kafka broker with
> > the same config.
> > There doesn’t seem to be any networking issue with the embedded one, since
> > the consumer successfully sends FindCoordinator messages to it and the
> > broker responds with "coordinator not found".
> >
> > On 29/08/2018, 17:46, "M. Manna" <manme...@gmail.com> wrote:
> >
> > So have you tried binding it to 9092 rather than randomising it, and
> > see if that makes any difference?
> >
> > On Wed, 29 Aug 2018 at 15:41, Cristian Petroaca <cpetro...@fitbit.com.invalid> wrote:
> >
> > > Port = 0 means Kafka will start listening on a random port, which I need.
> > > I tried it with 5000 but I get the same result.
> > >
> > > On 29/08/2018, 16:46, "M. Manna" <manme...@gmail.com> wrote:
> > >
> > > Can you extend the auto.commit.interval.ms to 5000 and retry? Also,
> > > why is your port set to 0?
> > >
> > > Regards,
> > >
> > > On Wed, 29 Aug 2018 at 14:25, Cristian Petroaca <cpetro...@fitbit.com.invalid> wrote:
> > >
> > > > Hi,
> > > >
> > > > I’m using the Kafka lib with version 2.11_1.0.1.
> > > > I use the KafkaServer.scala class to programmatically create a Kafka
> > > > instance and connect it to a programmatically created ZooKeeper instance.
> > > > It has the following properties:
> > > > "host.name", "127.0.0.1"
> > > > "port", "0"
> > > > "zookeeper.connect", "127.0.0.1:" + zooKeeperPort
> > > > "broker.id", "1"
> > > > "auto.create.topics.enable", "true"
> > > > "delete.topic.enable", "true"
> > > >
> > > > I then create a new Kafka consumer with the following properties:
> > > > "bootstrap.servers", “127.0.0.1” + kafkaPort
> > > > "auto.commit.interval.ms", "10"
> > > > “client_id”, “xxxx”
> > > > “enable.auto.commit”, “true”
> > > > “auto.commit.interval.ms”, “10”
> > > >
> > > > My problem is that after I subscribe the consumer to a custom topic,
> > > > the consumer just blocks in the .poll() method and I see a lot of messages like:
> > > > “Group coordinator lookup failed: The coordinator is not available.”
> > > >
> > > > I read on another forum that a possible problem is that the
> > > > _consumer_offsets topic doesn’t exist, but that’s not the case for me.
> > > > Can you suggest a possible root cause?
> > > >
> > > > Thanks,
> > > > Cristian
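Two details in the consumer configuration quoted at the bottom of this thread may be worth double-checking (hedged observations, not confirmed root causes): the snippet builds bootstrap.servers as “127.0.0.1” + kafkaPort with no colon between host and port, and the standard Kafka property name is client.id rather than client_id. A minimal sketch of the same configuration written out with java.util.Properties (the port and client id values are illustrative):

```java
import java.util.Properties;

public class ConsumerProps {
    /** Consumer settings as discussed in the thread; kafkaPort is illustrative. */
    static Properties consumerProps(int kafkaPort) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:" + kafkaPort); // host:port - colon included
        props.put("group.id", "dev.consumer");
        props.put("client.id", "dev.consumer_0");   // standard name is client.id, not client_id
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "10"); // 10 ms is very aggressive; default is 5000
        props.put("auto.offset.reset", "earliest"); // the setting that made poll() return messages
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps(9092).getProperty("bootstrap.servers"));
        // In real use: new KafkaConsumer<String, String>(consumerProps(9092))
    }
}
```

With the default auto.offset.reset of "latest", a consumer whose group has no committed offsets starts at the end of the log, which matches the reported symptom of polling successfully but receiving none of the 10 already-produced messages until "earliest" was set.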