parties = ports

On Thu, Sep 14, 2017 at 8:04 PM, Ali Akhtar <ali.rac...@gmail.com> wrote:
> I would try to put the SSL on different ports than what you're sending
> kafka to. Make sure the kafka ports don't do anything except communicate
> in plaintext, and put all 3rd parties on different parties.
>
> On Thu, Sep 14, 2017 at 7:23 PM, Yongtao You <yongtao_...@yahoo.com>
> wrote:
>
>> Does the following message mean broker 6 is having trouble talking to
>> broker 7? Broker 6's advertised listener is "PLAINTEXT://nginx:9906" and
>> Broker 7's advertised listener is "PLAINTEXT://nginx:9907". However, on
>> the nginx server, ports 9906 and 9907 are both SSL ports, because that's
>> what producers (filebeat) send data to, and that traffic needs to be
>> encrypted.
>>
>> [2017-09-14 21:59:32,543] WARN [Controller-6-to-broker-7-send-thread]:
>> Controller 6 epoch 1 fails to send request (type: UpdateMetadataRequest,
>> controllerId=6, controllerEpoch=1, partitionStates={}, liveBrokers=(id=6,
>> endPoints=(host=nginx, port=9906, listenerName=ListenerName(PLAINTEXT),
>> securityProtocol=PLAINTEXT), rack=null), (id=7, endPoints=(host=nginx,
>> port=9907, listenerName=ListenerName(PLAINTEXT),
>> securityProtocol=PLAINTEXT), rack=null)) to broker nginx:9907 (id: 7
>> rack: null). Reconnecting to broker. (kafka.controller.RequestSendThread)
>> java.io.IOException: Connection to 7 was disconnected before the response
>> was read
>>     at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:93)
>>     at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:225)
>>     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
>>
>> On Thursday, September 14, 2017, 9:42:58 PM GMT+8, Yongtao You
>> <yongtao_...@yahoo.com.INVALID> wrote:
>>
>> You are correct, that error message was a result of my misconfiguration.
>> I've corrected that. However, filebeat still can't send messages to Kafka.
>> In the Nginx log, I see the following:
>>
>> 2017/09/14 21:35:09 [info] 4030#4030: *60056 SSL_do_handshake() failed
>> (SSL: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown
>> protocol) while SSL handshaking, client: 172.16.16.101, server:
>> 0.0.0.0:9907
>>
>> where 172.16.16.101 is the host where one of the two Kafka brokers is
>> running. It looks like it's trying to connect to port 9907, which is
>> where the other Kafka broker listens. It's an [info] message, so I'm not
>> sure how serious it is, but I don't see messages sent from filebeat in
>> Kafka. :(
>>
>> Thanks!
>> -Yongtao
>>
>> On Thursday, September 14, 2017, 8:31:31 PM GMT+8, Ali Akhtar
>> <ali.rac...@gmail.com> wrote:
>>
>> If you ssh to the server where you got this error, are you able to reach
>> the IP of node 7 on the port it's trying to reach?
>>
>> On Thu, Sep 14, 2017 at 5:20 PM, Yongtao You <yongtao_...@yahoo.com>
>> wrote:
>>
>>> I'm getting a lot of these in the server.log:
>>>
>>> [2017-09-14 20:18:32,753] WARN Connection to node 7 could not be
>>> established. Broker may not be available.
>>> (org.apache.kafka.clients.NetworkClient)
>>>
>>> where node 7 is another broker in the cluster.
>>>
>>> Thanks.
>>> -Yongtao
>>>
>>> On Thursday, September 14, 2017, 8:13:09 PM GMT+8, Yongtao You
>>> <yongtao_...@yahoo.com> wrote:
>>>
>>> I got errors saying the other brokers are not reachable, or something
>>> like that. Let me dig up the exact error messages. I am guessing the
>>> problem was that the advertised listeners are of PLAINTEXT format, but
>>> Nginx requires SSL. But I could be wrong.
>>>
>>> Thanks!
>>> -Yongtao
>>>
>>> On Thursday, September 14, 2017, 8:07:38 PM GMT+8, Ali Akhtar
>>> <ali.rac...@gmail.com> wrote:
>>>
>>> How do you know that the brokers don't talk to each other?
>>>
>>> On Thu, Sep 14, 2017 at 4:32 PM, Yongtao You
>>> <yongtao_...@yahoo.com.invalid> wrote:
>>>
>>>> Hi,
>>>>
>>>> I would like to know the right way to set up a Kafka cluster with
>>>> Nginx in front of it as a reverse proxy. Let's say I have 2 Kafka
>>>> brokers running on 2 different hosts, and an Nginx server running on
>>>> another host. Nginx will listen on 2 different ports, and each will
>>>> forward to one Kafka broker. Producers will connect to one of the 2
>>>> ports on the Nginx host.
>>>>
>>>> Nginx-Host: listens on 9000 ssl (forwards to <kafka-host-0>:9092 in
>>>> plain text); 9001 ssl (forwards to <kafka-host-1>:9092 in plain text)
>>>>
>>>> Kafka-Host-0: listeners=PLAINTEXT://<kafka-host-0-ip>:9092;
>>>> advertised.listeners=PLAINTEXT://<nginx-host-ip>:9000
>>>> Kafka-Host-1: listeners=PLAINTEXT://<kafka-host-1-ip>:9092;
>>>> advertised.listeners=PLAINTEXT://<nginx-host-ip>:9001
>>>>
>>>> Ports on Nginx will have SSL enabled so that messages sent from
>>>> producers to Nginx are encrypted; traffic between Nginx and Kafka is
>>>> in plain text since it's on the internal network.
>>>>
>>>> Why have producers go through Nginx? The main reason is that producers
>>>> will only need to open their firewall to a single IP, so that even
>>>> later on when I add another Kafka broker, I don't need to modify the
>>>> firewall of all the producers.
>>>>
>>>> My problem is that I can't make the above setup work. The brokers are
>>>> unable to talk to one another. :(
>>>>
>>>> So, what's the right way to do this? Does anyone have experience
>>>> setting up something similar? Or any recommendations for a different
>>>> setup that will not require changes on the producer's side when new
>>>> Kafka brokers are added?
>>>>
>>>> Thanks!
>>>> Yongtao
>>>>
>>>> PS. The producers in question are Filebeats
>>>> (https://www.elastic.co/products/beats/filebeat).
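The separate-ports approach Ali suggests maps onto Kafka's named-listener support (available since Kafka 0.10.2 via KIP-103). A minimal sketch of `server.properties` for one broker, assuming that version or later; the hostnames are placeholders matching the thread, and the listener names INTERNAL/EXTERNAL are arbitrary labels, not Kafka built-ins:

```properties
# Hypothetical config for Kafka-Host-0: keep inter-broker traffic on a
# direct plaintext listener, and advertise the nginx endpoint only to
# external clients. Both listeners are PLAINTEXT from the broker's point
# of view, since nginx terminates SSL before forwarding.
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
listeners=INTERNAL://<kafka-host-0-ip>:9092,EXTERNAL://<kafka-host-0-ip>:9093

# Brokers reach each other on INTERNAL directly, bypassing nginx,
# so controller-to-broker requests never hit an SSL port.
inter.broker.listener.name=INTERNAL

# Clients (filebeat) bootstrap through the proxy and are told to keep
# using it; other brokers are told to use the direct address.
advertised.listeners=INTERNAL://<kafka-host-0-ip>:9092,EXTERNAL://<nginx-host-ip>:9000
```

With a single advertised listener per broker, as in the original setup, the controller has no choice but to dial the nginx SSL port in plaintext, which is exactly the `SSL23_GET_CLIENT_HELLO:unknown protocol` handshake failure in the nginx log.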
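The proxy side of the setup Yongtao describes would live in nginx's `stream` context (TCP proxying, not `http`), with SSL terminated per port. A rough sketch, assuming nginx 1.9+ built with the stream and stream SSL modules; certificate paths and hostnames are placeholders:

```nginx
# Hypothetical nginx.conf fragment: SSL-terminating TCP proxy in front
# of two Kafka brokers. Producers connect with TLS; brokers receive
# plain text on their 9092 listeners.
stream {
    ssl_certificate     /etc/nginx/ssl/kafka-proxy.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/kafka-proxy.key;  # placeholder path

    server {
        listen 9000 ssl;                 # filebeat connects here over TLS
        proxy_pass <kafka-host-0>:9092;  # forwarded in plain text
    }

    server {
        listen 9001 ssl;
        proxy_pass <kafka-host-1>:9092;
    }
}
```

Note that these SSL-terminating ports must only ever be dialed by TLS clients; any plaintext connection to 9000/9001 (for example, a broker following a plaintext advertised listener) will fail the handshake, as seen in the thread.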