You need to pass the "security.protocol" config using the --producer.config or
--consumer.config command-line options.
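
For example, a minimal client properties file could look like this (the name
client-sasl.properties is just a placeholder; adjust the values to your
environment):

# client-sasl.properties -- SASL/Kerberos settings for the console clients
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

and then the console producer would be started with something like:

bin/kafka-console-producer.sh \
    --broker-list hzadg-mammut-platform2.server.163.org:6667 \
    --topic test1 \
    --producer.config client-sasl.properties
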
Only the Java clients support security. You also need to use the "--new-consumer"
option with kafka-console-consumer.sh.
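
For example, assuming the same example client-sasl.properties file as above
(note that the new consumer takes --bootstrap-server instead of --zookeeper):

bin/kafka-console-consumer.sh --new-consumer \
    --bootstrap-server hzadg-mammut-platform2.server.163.org:6667 \
    --topic test1 --from-beginning \
    --consumer.config client-sasl.properties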

You also need to set up the producer/consumer scripts to pick up the JAAS conf
via the -Djava.security.auth.login.config system property.
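
For example, via KAFKA_OPTS (the JAAS file path below is just a placeholder; the
file should contain a KafkaClient section like the one in your kafka_jaas.conf):

export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"
bin/kafka-console-producer.sh \
    --broker-list hzadg-mammut-platform2.server.163.org:6667 \
    --topic test1 \
    --producer.config client-sasl.properties
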
Alternatively, newer clients support the "sasl.jaas.config" config property to
configure the JAAS settings.
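
Something like this in the client properties file (sasl.jaas.config was added in
0.10.2 if I remember correctly, so it will not work with your 0.10.0.0 clients;
the keytab path and principal below are placeholders):

sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    keyTab="/etc/security/keytabs/kafka.client.keytab" \
    principal="kafkaclient@EXAMPLE.COM";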

http://kafka.apache.org/documentation/#security_kerberos_sasl_clientconfig

On Fri, Dec 1, 2017 at 3:48 PM, 李书明 <alemmont...@126.com> wrote:

> 1.Version
>
>
> Kafka version: 0.10.0.0
>
>
> 2.Config
> #config/server.properties
> advertised.listeners=SASL_PLAINTEXT://hzadg-mammut-platform2.server.163.org:6667
> listeners=SASL_PLAINTEXT://hzadg-mammut-platform2.server.163.org:6667
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> auto.create.topics.enable=true
> auto.leader.rebalance.enable=true
> broker.id=0
> compression.type=producer
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=3
> controlled.shutdown.retry.backoff.ms=5000
> controller.message.queue.size=10
> controller.socket.timeout.ms=30000
> log.cleanup.interval.mins=10
> log.dirs=/srv/nbs/0/kafka-logs
> log.index.interval.bytes=4096
>
>
> #config/kafka_jaas.conf
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> keyTab="/etc/security/keytabs/kafka.service.keytab"
> storeKey=true
> useTicketCache=false
> serviceName="kafka"
> principal="kafka/hzadg-mammut-platform2.server.163....@bdms.163.com";
> };
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useTicketCache=true
> renewTicket=true
> serviceName="kafka";
> };
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> keyTab="/etc/security/keytabs/kafka.service.keytab"
> storeKey=true
> useTicketCache=false
> serviceName="zookeeper"
> principal="kafka/hzadg-mammut-platform2.server.163....@bdms.163.com";
>
>
> config/consumer.properties
> zookeeper.connect=127.0.0.1:2181
>
>
> # timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
>
>
> #consumer group id
> group.id=test-consumer-group
>
>
> #consumer timeout
> #consumer.timeout.ms=5000
>
>
> 3.Exceptions
>
>
> # set config/tools-log4j.properties -> DEBUG
>
>
> #./bin/kafka-topics.sh --zookeeper hzadg-mammut-platform2.server.163.org:2181,hzadg-mammut-platform3.server.163.org:2181,hzadg-mammut-platform4.server.163.org:2181 --topic test1 --create --partitions 1 --replication-factor 1
> ok
>
>
> #./bin/kafka-console-producer.sh --broker-list
> hzadg-mammut-platform2.server.163.org:6667 --topic test1
>
>
> [2017-12-01 18:11:39,428] DEBUG Node -1 disconnected.
> (org.apache.kafka.clients.NetworkClient)
> [2017-12-01 18:11:39,429] WARN Bootstrap broker
> hzadg-mammut-platform2.server.163.org:6667 disconnected
> (org.apache.kafka.clients.NetworkClient)
> [2017-12-01 18:11:39,429] DEBUG Give up sending metadata request since no
> node is available (org.apache.kafka.clients.NetworkClient)
> [2017-12-01 18:11:39,529] DEBUG Initialize connection to node -1 for
> sending metadata request (org.apache.kafka.clients.NetworkClient)
> [2017-12-01 18:11:39,529] DEBUG Initiating connection to node -1 at
> hzadg-mammut-platform2.server.163.org:6667. (org.apache.kafka.clients.
> NetworkClient)
> [2017-12-01 18:11:39,530] DEBUG Completed connection to node -1
> (org.apache.kafka.clients.NetworkClient)
> [2017-12-01 18:11:39,530] DEBUG Sending metadata request {topics=[test1]}
> to node -1 (org.apache.kafka.clients.NetworkClient)
> [2017-12-01 18:11:39,531] DEBUG Connection with
> hzadg-mammut-platform2.server.163.org/10.201.168.136 disconnected
> (org.apache.kafka.common.network.Selector)
> java.io.EOFException
>         at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
>         at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>         at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
>         at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
>         at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323)
>         at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
>         at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
>         at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:229)
>         at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134)
>         at java.lang.Thread.run(Thread.java:748)
> [2017-12-01 18:11:39,532] DEBUG Node -1 disconnected.
> (org.apache.kafka.clients.NetworkClient)
> [2017-12-01 18:11:39,533] WARN Bootstrap broker
> hzadg-mammut-platform2.server.163.org:6667 disconnected
> (org.apache.kafka.clients.NetworkClient)
>
>
> #bin/kafka-console-consumer.sh --zookeeper hzadg-mammut-platform2.server.163.org:2181,hzadg-mammut-platform3.server.163.org:2181,hzadg-mammut-platform4.server.163.org:2181 --topic test1 --from-beginning --consumer.config=config/consumer.properties
>
>
> [2017-12-01 18:12:44,956] INFO [ConsumerFetcherManager-1512123164274]
> Added fetcher for partitions ArrayBuffer() (kafka.consumer.
> ConsumerFetcherManager)
> [2017-12-01 18:12:45,158] DEBUG Reading reply sessionid:0x360117d95990006,
> packet:: clientPath:null serverPath:null finished:false header:: 34,8
> replyHeader:: 34,21474836636,0  request:: '/brokers/ids,F  response::
> v{'0}  (org.apache.zookeeper.ClientCnxn)
> [2017-12-01 18:12:45,160] DEBUG Reading reply sessionid:0x360117d95990006,
> packet:: clientPath:null serverPath:null finished:false header:: 35,4
> replyHeader:: 35,21474836636,0  request:: '/brokers/ids/0,F  response:: #
> 7b226a6d785f706f7274223a2d312c2274696d657374616d70223a223135
> 3132313232303734343630222c22656e64706f696e7473223a5b22534153
> 4c5f504c41494e544558543a2f2f687a6164672d6d616d6d75742d706c61
> 74666f726d322e7365727665722e3136332e6f72673a36363637225d2c22
> 686f7374223a6e756c6c2c2276657273696f6e223a332c22706f7274223a
> 2d317d,s{21474836584,21474836584,1512122074465,1512122074465,0,0,0,
> 99098422877814784,153,0,21474836584}  (org.apache.zookeeper.ClientCnxn)
> [2017-12-01 18:12:45,171] WARN [test-consumer-group_hzadg-
> mammut-platform2.server.163.org-1512123164257-c4d5f1f1-leader-finder-thread],
> Failed to find leader for Set([test1,0]) (kafka.consumer.
> ConsumerFetcherManager$LeaderFinderThread)
> kafka.common.BrokerEndPointNotAvailableException: End point with security protocol PLAINTEXT not found for broker 0
>         at kafka.cluster.Broker$$anonfun$5.apply(Broker.scala:131)
>         at kafka.cluster.Broker$$anonfun$5.apply(Broker.scala:131)
>         at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
>         at scala.collection.AbstractMap.getOrElse(Map.scala:59)
>         at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:130)
>         at kafka.utils.ZkUtils$$anonfun$getAllBrokerEndPointsForChannel$1.apply(ZkUtils.scala:166)
>         at kafka.utils.ZkUtils$$anonfun$getAllBrokerEndPointsForChannel$1.apply(ZkUtils.scala:166)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>         at kafka.utils.ZkUtils.getAllBrokerEndPointsForChannel(ZkUtils.scala:166)
>         at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:65)
>         at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
>
>
> 4.Questions
>
>
> Can somebody help me look at my problem?
>
>
> Is there any config I missed?
>
>
> Thank you very much!
