One more thing: while checking my Kafka server.log, I found it is filled with
this warning:

Attempting to send response via channel for which there is no open
connection, connection id 2 (kafka.network.Processor)

Is this the reason for the above issue? How can I resolve it? I need help;
production is breaking.
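
To see whether the two symptoms line up, I can count the processor warnings
and compare their timestamps with the ZooKeeper session expirations, roughly
like this (<kafka-log-dir> is just a placeholder for wherever server.log is
written):

    # count the processor warnings, then list recent ZooKeeper session events
    grep -c "Attempting to send response via channel" <kafka-log-dir>/server.log
    grep -E 'Client session timed out|has expired' <kafka-log-dir>/server.log | tail -n 20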


Regards,
Abhimanyu

On Thu, Nov 16, 2017 at 5:08 PM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:

> Hi, I am using a single-node Kafka v0.10.2 (16 GB RAM, 8 cores) and a
> single-node ZooKeeper v3.4.9 (4 GB RAM, 1 core). I have 64 consumer groups
> and 500 topics, each with 250 partitions. Commands that require only the
> Kafka broker run fine, for example:
>
> > ./kafka-consumer-groups.sh --bootstrap-server localhost:9092
> > --describe --group <group>
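>
> (For scale, that is 500 × 250 = 125,000 partitions hosted by this single
> broker, with their metadata kept in the single ZooKeeper node.)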
>
> But when I execute admin commands such as create topic or alter topic, for
> example:
>
> > ./kafka-topics.sh --create --zookeeper <zookeeper>:2181
> > --replication-factor 1 --partitions 1 --topic <topic>
>
> The following exception is displayed:
>
>
>
> > Error while executing topic command : replication factor: 1 larger
> > than available brokers: 0 [2017-11-16 11:22:13,592] ERROR
> > org.apache.kafka.common.errors.InvalidReplicationFactorException:
> > replication factor: 1 larger than available brokers: 0
> > (kafka.admin.TopicCommand$)
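>
> Since the error reports "available brokers: 0", it looks as if no broker id
> is registered in ZooKeeper at that moment. A cross-check (assuming the
> default layout, i.e. no chroot in zookeeper.connect) would be to list the
> registered ids, which should show id 1:
>
> > ./zookeeper-shell.sh <zookeeper>:2181 ls /brokers/ids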
>
> I checked that my broker is up. In server.log, the following warnings appear:
>
>     [2017-11-16 11:14:26,959] WARN Client session timed out, have not heard from server in 15843ms for sessionid 0x15aa7f586e1c061 (org.apache.zookeeper.ClientCnxn)
>     [2017-11-16 11:14:28,795] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c061 has expired (org.apache.zookeeper.ClientCnxn)
>     [2017-11-16 11:21:46,055] WARN Unable to reconnect to ZooKeeper service, session 0x15aa7f586e1c067 has expired (org.apache.zookeeper.ClientCnxn)
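>
> These repeated expirations suggest the broker keeps losing its ZooKeeper
> session. A quick look at the ZooKeeper node itself with the standard 3.4.x
> four-letter-word commands (host is a placeholder) might show whether it is
> slow or overloaded:
>
> > echo ruok | nc <zookeeperIP> 2181   # expect "imok"
> > echo stat | nc <zookeeperIP> 2181   # check Latency min/avg/max and Outstanding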
>
> Below is my Kafka server configuration:
>
>     broker.id=1
>     delete.topic.enable=true
>     num.network.threads=3
>     num.io.threads=8
>     socket.send.buffer.bytes=102400
>     socket.receive.buffer.bytes=102400
>     socket.request.max.bytes=104857600
>     log.dirs=/kafka/data/logs
>     num.partitions=1
>     log.segment.bytes=1073741824
>     log.retention.check.interval.ms=300000
>     zookeeper.connect=<zookeeperIP>:2181
>     zookeeper.connection.timeout.ms=6000
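>
> I also notice that only zookeeper.connection.timeout.ms is set here. If the
> session timeout is what keeps expiring, would it make sense to raise it
> explicitly? A sketch of what I could try (the values below are guesses on my
> part, not something I have verified):
>
>     # broker-side ZooKeeper timeouts; the session timeout defaults to 6000 ms
>     zookeeper.session.timeout.ms=30000
>     zookeeper.connection.timeout.ms=30000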
>
> My ZooKeeper configuration is:
>
>     # The number of milliseconds of each tick
>     tickTime=2000
>     # The number of ticks that the initial
>     # synchronization phase can take
>     initLimit=10
>     # The number of ticks that can pass between
>     # sending a request and getting an acknowledgement
>     syncLimit=5
>     # the directory where the snapshot is stored.
>     # do not use /tmp for storage, /tmp here is just
>     # example sakes.
>     dataDir=/zookeeper/data
>     # the port at which the clients will connect
>     clientPort=2181
>     # the maximum number of client connections.
>     # increase this if you need to handle more clients
>     #maxClientCnxns=60
>     autopurge.snapRetainCount=20
>     # Purge task interval in hours
>     # Set to "0" to disable auto purge feature
>     autopurge.purgeInterval=48
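>
> Since maxClientCnxns is left commented out (so at its default), I can also
> check how many client connections and znodes this single ZooKeeper node is
> handling, again with standard four-letter-word commands (host is a
> placeholder):
>
> > echo cons | nc <zookeeperIP> 2181 | wc -l      # open client connections
> > echo mntr | nc <zookeeperIP> 2181 | grep -E 'znode_count|outstanding_requests'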
>
> I am not able to figure out which configuration to tune or what I am
> missing. Any help will be appreciated.
>
>
>
>
>  Regards,
> Abhimanyu
>
