Jorg Heymans created KAFKA-8203:
-----------------------------------

             Summary: plaintext connections to SSL-secured broker can be handled more elegantly
                 Key: KAFKA-8203
                 URL: https://issues.apache.org/jira/browse/KAFKA-8203
             Project: Kafka
          Issue Type: Improvement
    Affects Versions: 2.1.1
            Reporter: Jorg Heymans


Mailing list thread: 
[https://lists.apache.org/thread.html/39935157351c0ad590e6cf02027816d664f1fd3724a25c1133a3bba6@%3Cusers.kafka.apache.org%3E]

----- reproduced here -----

We have our brokers secured with these standard properties

 
{code:java}
listeners=SSL://a.b.c:9030
ssl.truststore.location=...
ssl.truststore.password=...
ssl.keystore.location=...
ssl.keystore.password=...
ssl.key.password=...
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2
{code}
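For comparison, a client that is meant to connect to this listener needs matching SSL settings on its side, along these lines (a sketch only; paths and passwords are placeholders):

{code:java}
security.protocol=SSL
ssl.truststore.location=...
ssl.truststore.password=...
# because the broker sets ssl.client.auth=required, the client also needs a keystore
ssl.keystore.location=...
ssl.keystore.password=...
ssl.key.password=...
{code}
A client that omits security.protocol stays on the PLAINTEXT default, which is the scenario described next.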
It is a bit surprising that when a (Java) client attempts to connect without SSL configured, and therefore opens a PLAINTEXT connection, it does not get a TLS exception indicating that SSL is required. I would have expected a hard transport-level error making it clear that non-SSL connections are not allowed; instead, the client sees only the following (with debug logging enabled):


{code:java}
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 21234bee31165527
[main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=my-test-group] Kafka consumer initialized
[main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=my-test-group] Subscribed to topic(s): events
[main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=my-test-group] Sending FindCoordinator request to broker a.b.c:9030 (id: -1 rack: null)
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initiating connection to node a.b.c:9030 (id: -1 rack: null) using address /a.b.c
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
[main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=my-test-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Completed connection to node -1. Fetching API versions.
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initiating API versions fetch from node -1.
[main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=my-test-group] Connection with /a.b.c disconnected
java.io.EOFException
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342)
	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
	at eu.europa.ec.han.TestConsumer.main(TestConsumer.java:22)
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Node -1 disconnected.
[main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Cancelled request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=2, clientId=consumer-1, correlationId=0) due to node -1 being disconnected
[main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=my-test-group] Coordinator discovery failed, refreshing metadata
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Give up sending metadata request since no node is available
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Give up sending metadata request since no node is available
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Give up sending metadata request since no node is available
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initialize connection to node a.b.c:9030 (id: -1 rack: null) for sending metadata request
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initiating connection to node a.b.c:9030 (id: -1 rack: null) using address /a.b.c
{code}
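For reference, a minimal consumer along the lines below reproduces this. The class name mirrors the TestConsumer from the stack trace, but the body is a sketch; the bootstrap address and topic are the placeholders used above. Because security.protocol is never set, the client defaults to PLAINTEXT against the SSL listener:

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TestConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The broker listener is SSL://a.b.c:9030, but no security.protocol
        // is configured here, so the client silently uses PLAINTEXT.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "a.b.c:9030");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-test-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            // No TLS error ever reaches the caller: poll() keeps retrying
            // while the selector logs java.io.EOFException at DEBUG level.
            consumer.poll(Duration.ofSeconds(10));
        }
    }
}
{code}
Adding security.protocol=SSL (plus the truststore/keystore properties sketched above) makes the same code connect, which suggests the problem is purely in how the protocol mismatch is surfaced to the client.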
 


