Not too familiar with that error, but I do have Kafka working on Kubernetes. I'll share my files here in case that helps:
Zookeeper: https://gist.github.com/aliakhtar/812974c35cf2658022fca55cc83f4b1d
Kafka: https://gist.github.com/aliakhtar/724fbee6910dec7263ab70332386af33

Essentially I have 3 Kafka nodes and 3 Zookeeper nodes, and my hacky way of getting this to work was to have 3 Kafka deployments + services, and likewise for Zookeeper. Ideally you would use a StatefulSet for this, but Zookeeper and Kafka require a unique ID for each node to be provided in the config, and there's currently no way to do that in Kubernetes (or there wasn't, last I checked). If there were, e.g. using the pod IP, then you'd use a StatefulSet with a valueFrom of the pod's IP, and pass that on as the unique ID to each node.

On Tue, Aug 22, 2017 at 7:28 PM, Sean McElroy <sean.mcelroy1...@gmail.com> wrote:

> I'm not sure this is the correct place to post this question, but anyway...
>
> When running kafka in kubernetes, the kafka config contains this:
>
> listeners = PLAINTEXT://:tcp://10.0.0.186:9092
>
> Which is leading to this error: No security protocol defined for listener
> PLAINTEXT://:TCP
>
> Here is the section of the kubernetes yaml file that defines kafka:
>
> - image: wurstmeister/kafka
>   name: kafka
>   volumeMounts:
>   - name: kafka-vol
>     mountPath: /var/run/docker.sock
>   env:
>   - name: KAFKA_ADVERTISED_HOST_NAME
>     valueFrom:
>       fieldRef:
>         fieldPath: status.podIP
>   - name: KAFKA_ADVERTISED_PORT
>     value: "9092"
>   - name: KAFKA_ZOOKEEPER_CONNECT
>     value: localhost:2181
>   - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
>     value: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
>   - name: KAFKA_ADVERTISED_PROTOCOL_NAME
>     value: OUTSIDE
>   - name: KAFKA_PROTOCOL_NAME
>     value: INSIDE
>   ports:
>   - containerPort: 9092
>
> Can anyone see what I'm doing wrong?
>
> Thanks
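
P.S. For what it's worth, a StatefulSet sketch of what I described above might look roughly like this. This is untested: rather than the pod IP (which isn't a valid integer broker ID), it derives the ID from the StatefulSet pod ordinal via the hostname. KAFKA_BROKER_ID and start-kafka.sh are what I believe the wurstmeister/kafka image uses, but treat both as assumptions:

```yaml
# Sketch only -- untested. Assumes the image honors KAFKA_BROKER_ID
# and uses start-kafka.sh as its entrypoint script.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        command:
        - sh
        - -c
        # StatefulSet pods are named kafka-0, kafka-1, ... so the
        # ordinal suffix of the hostname gives a stable, unique ID.
        - export KAFKA_BROKER_ID=${HOSTNAME##*-} && start-kafka.sh
```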