Poll timeout is 1000 ms.
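
Roughly, the consume loop looks like this (a simplified sketch, not the exact
code; the topic name, the String generics, and the consumerProps variable are
placeholders):

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // consumerProps holds the consumer configuration quoted further down
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
    consumer.subscribe(Collections.singletonList("my-topic"));
    while (true) {
        // In Kafka 1.0.1, poll() takes the timeout in milliseconds
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
    }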

On 30/08/2018, 18:28, "M. Manna" <manme...@gmail.com> wrote:

    What is your poll timeout for the consumer's poll() method?
    
    
    
    On Thu, 30 Aug 2018, 16:23 Cristian Petroaca, <cpetro...@fitbit.com.invalid>
    wrote:
    
    > Ok, so I mixed things up a little.
    > I started with the Kafka server configured to auto-create topics.
    > That gave the error.
    > But turning auto-create off and creating the topic with AdminUtils
    > does not show the error, and the consumer actually polls for messages.
    > I did not modify the "default.replication.factor" for auto-created topics,
    > and it defaults to 1, so I'm not sure why I saw the error in the first
    > place.
    >
    > Even though I no longer see the error and my consumer polls for messages,
    > it does not receive any. I am waiting a reasonable amount of time (1 min)
    > after the producer has written the messages.
    > An independent console consumer connected to the same broker and topic
    > does receive them.
    > My consumer config does not seem exotic enough to cause such a situation.
    > Any reason for not receiving messages?
    >
    > Thanks
    >
    > On 30/08/2018, 11:25, "Cristian Petroaca" <cpetro...@fitbit.com.INVALID>
    > wrote:
    >
    >     Yes.
    >     In my programmatic env I first create it with:
    >     AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties(),
    > RackAwareMode.Enforced$.MODULE$);
    >     So partitions = 1 and replication = 1.
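    >
    >     For completeness, the surrounding code is roughly this (a sketch from
    >     memory, not the exact code; the ZooKeeper timeouts are illustrative):
    >
    >     import java.util.Properties;
    >     import kafka.admin.AdminUtils;
    >     import kafka.admin.RackAwareMode;
    >     import kafka.utils.ZkUtils;
    >
    >     // Connect to the embedded ZooKeeper, create the topic, then clean up
    >     ZkUtils zkUtils = ZkUtils.apply("127.0.0.1:" + zooKeeperPort, 30000, 30000, false);
    >     AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties(),
    >             RackAwareMode.Enforced$.MODULE$);
    >     zkUtils.close();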
    >
    >     The same for the remote broker: I created the topic with --partitions 1
    >     --replication-factor 1.
    >
    >     Are there any other reasons for that error?
    >
    >     On 29/08/2018, 18:06, "M. Manna" <manme...@gmail.com> wrote:
    >
    >         Does the topic exist in both your programmatic broker and remote
    >         broker?
    >
    >
    >         Also, are the topic settings the same for partitions and replication
    >         factor?
    >         GROUP_COORDINATOR_NOT_AVAILABLE is enforced as of 0.11.x when the
    >         auto-created topic partition/replication-factor setup doesn't match
    >         the server's config. So you might want to check all of these.
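    >
    >         For a single-broker setup, the server-side settings I have in mind
    >         are along these lines (illustrative values for one broker, not taken
    >         from your config; offsets.topic.replication.factor also applies to
    >         the auto-created __consumer_offsets topic):
    >
    >         num.partitions=1
    >         default.replication.factor=1
    >         offsets.topic.replication.factor=1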
    >
    >         Regards,
    >
    >         On Wed, 29 Aug 2018 at 15:55, Cristian Petroaca
    >         <cpetro...@fitbit.com.invalid> wrote:
    >
    >         > Tried it, same problem with 9092.
    >         > By the way, the same consumer works with a remote 1.0.1 Kafka
    >         > broker with the same config.
    >         > There don't seem to be any networking issues with the embedded one,
    >         > since the consumer successfully sends FindCoordinator requests to it
    >         > and the broker responds with "coordinator not found".
    >         >
    >         >
    >         > On 29/08/2018, 17:46, "M. Manna" <manme...@gmail.com> wrote:
    >         >
    >         >     So have you tried binding it to 9092 rather than randomising
    >         >     it, to see if that makes any difference?
    >         >
    >         >     On Wed, 29 Aug 2018 at 15:41, Cristian Petroaca
    >         >     <cpetro...@fitbit.com.invalid> wrote:
    >         >
    >         >     > Port = 0 means Kafka will start listening on a random port,
    >         >     > which is what I need.
    >         >     > I tried it with 5000 but I get the same result.
    >         >     >
    >         >     >
    >         >     > On 29/08/2018, 16:46, "M. Manna" <manme...@gmail.com>
    > wrote:
    >         >     >
    >         >     >     Can you extend auto.commit.interval.ms to 5000 and retry?
    >         >     >     Also, why is your port set to 0?
    >         >     >
    >         >     >     Regards,
    >         >     >
    >         >     >     On Wed, 29 Aug 2018 at 14:25, Cristian Petroaca
    >         >     >     <cpetro...@fitbit.com.invalid> wrote:
    >         >     >
    >         >     >     > Hi,
    >         >     >     >
    >         >     >     > I'm using the Kafka lib with version 2.11_1.0.1.
    >         >     >     > I use the KafkaServer.scala class to programmatically
    >         >     >     > create a Kafka instance and connect it to a
    >         >     >     > programmatically created Zookeeper instance.
    >         >     >     > It has the following properties:
    >         >     >     > "host.name", "127.0.0.1"
    >         >     >     > "port", "0"
    >         >     >     > "zookeeper.connect", "127.0.0.1:" + zooKeeperPort
    >         >     >     > "broker.id", "1"
    >         >     >     > "auto.create.topics.enable", "true"
    >         >     >     > "delete.topic.enable", "true"
    >         >     >     >
    >         >     >     > I then create a new Kafka Consumer with the following
    >         >     >     > properties:
    >         >     >     > "bootstrap.servers", "127.0.0.1:" + kafkaPort
    >         >     >     > "auto.commit.interval.ms", "10"
    >         >     >     > "client_id", "xxxx"
    >         >     >     > "enable.auto.commit", "true"
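    >         >     >     >
    >         >     >     > As a Properties object that is roughly (a sketch; the
    >         >     >     > deserializers are not listed above but are assumed to be
    >         >     >     > StringDeserializer, and the group.id value is a placeholder):
    >         >     >     >
    >         >     >     > import java.util.Properties;
    >         >     >     > import org.apache.kafka.clients.consumer.KafkaConsumer;
    >         >     >     >
    >         >     >     > Properties consumerProps = new Properties();
    >         >     >     > consumerProps.put("bootstrap.servers", "127.0.0.1:" + kafkaPort);
    >         >     >     > consumerProps.put("group.id", "test-group");
    >         >     >     > consumerProps.put("client_id", "xxxx");
    >         >     >     > consumerProps.put("enable.auto.commit", "true");
    >         >     >     > consumerProps.put("auto.commit.interval.ms", "10");
    >         >     >     > consumerProps.put("key.deserializer",
    >         >     >     >         "org.apache.kafka.common.serialization.StringDeserializer");
    >         >     >     > consumerProps.put("value.deserializer",
    >         >     >     >         "org.apache.kafka.common.serialization.StringDeserializer");
    >         >     >     > KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);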
    >         >     >     >
    >         >     >     > My problem is that after I subscribe the consumer to a
    >         >     >     > custom topic, the consumer just blocks in the .poll()
    >         >     >     > method and I see a lot of messages like:
    >         >     >     > "Group coordinator lookup failed: The coordinator is
    >         >     >     > not available."
    >         >     >     >
    >         >     >     > I read on another forum that a possible problem is
    >         >     >     > that the __consumer_offsets topic doesn't exist, but
    >         >     >     > that's not the case for me.
    >         >     >     >
    >         >     >     > Can you suggest a possible root cause?
    >         >     >     >
    >         >     >     > Thanks,
    >         >     >     > Cristian
    >         >     >     >
    >         >     >
    >         >     >
    >         >     >
    >         >
    >         >
    >         >
    >
    >
    >
    >
    >
    
