How to use CRL (Certificate Revocation List) with Kafka

2021-08-24 Thread Darshan
Hi
We have a private CA, and our Kafka brokers' certificates are signed by it.
A bunch of external clients connect to our brokers; before connecting, they
download the private CA's cert and add it to their truststore. Everything
works fine.

On the Kafka broker side, we want to check a CRL before we authenticate any
client. Just wondering how we can use a CRL or OCSP (Online Certificate
Status Protocol) with Kafka? I couldn't find any documentation around it,
so I thought of asking the community.
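For reference, a hedged sketch of the usual workaround: Kafka itself exposes no broker setting for revocation checking, but the underlying JSSE stack can be told to check CRLs/OCSP via JVM properties. Everything below is an assumption to verify against your JDK's documentation, not a documented Kafka feature.

```shell
# Enable JSSE revocation checking for the broker JVM (assumed flags;
# verify them against your JDK docs before relying on them):
export KAFKA_OPTS="-Dcom.sun.net.ssl.checkRevocation=true \
-Dcom.sun.security.enableCRLDP=true"
# OCSP additionally needs the Security property 'ocsp.enable=true', which
# is read from the JDK's java.security file rather than the command line.
# Then start the broker as usual:
#   bin/kafka-server-start.sh config/server.properties
echo "$KAFKA_OPTS"
```

Note that with these flags the checks apply to every TLS handshake the JVM makes; newer Kafka versions (2.6+) also offer `ssl.engine.factory.class` for finer-grained control.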

Any help would be appreciated.

Thanks.
--Darshan


Dynamic Loading of Truststore Issue

2020-03-04 Thread Darshan
Hi

We are on Kafka 1.1.1. We add a bunch of new entries (say ~10 new entries)
to the truststore and restart Kafka so it re-reads the truststore file.
Everything works fine.

We wanted to move to Kafka 2.0.x to get the new feature whereby we can
dynamically remove something from the truststore. Say we want to remove
1 entry from the truststore; this feature works fine. But if we restart
Kafka, the 9 previously added entries no longer work. Is this by design?

I also saw that in Kafka 1.1.1, when we added a bunch of new entries to the
truststore, the file size of the truststore went up. But in Kafka 2.0.1, the
truststore file size stays constant.

Can someone please comment:
1. Is the issue we are seeing by design?
2. Do we need to re-add all entries to the truststore every time Kafka
restarts?
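In case it helps anyone debugging the same symptoms: dynamic per-broker configs set through kafka-configs.sh are stored in ZooKeeper and persist across restarts, so it is worth listing what is actually stored for the broker. A hedged sketch, not runnable without a cluster (broker id 1 and localhost:9092 are assumptions):

```shell
# List the dynamically stored per-broker configs (persisted in ZooKeeper,
# so they survive restarts); adjust the broker id and bootstrap address:
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 1 --describe
```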

Thanks.


Re: Help - Updating Keystore Dynamically - KAFKA-6810

2019-05-16 Thread Darshan
I sent another email saying that I am looking to dynamically update the SSL
truststore, not the keystore. Would that still be relevant? Thanks.

On Thu, May 16, 2019 at 2:54 PM Peter Bukowinski  wrote:

> It’s my understanding that dynamic configuration requires you to write
> znodes, e.g. /config/brokers/ssl.keystore.location. I believe you can use
> the same path. Brokers should be watching that path and if a node is added
> or updated the config values will be read in and loaded over existing
> values.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration#KIP-226-DynamicBrokerConfiguration-SSLkeystore
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration#KIP-226-DynamicBrokerConfiguration-SSLkeystore
> >
>
>
> > On May 16, 2019, at 2:08 PM, Darshan 
> wrote:
> >
> > Hi
> >
> > I am testing out Kafka 2.2.0 and was hoping to test out "Enable dynamic
> > reconfiguration of SSL truststores"
> > https://issues.apache.org/jira/browse/KAFKA-6810. But unfortunately I
> > could not get it to work. Please find the server.properties. Just
> > wondering if we need a change of config. Please advise.
> >
> > 1. I added a new entry in the truststore, and validated that it is
> > present.
> > 2. The client (kafka writer) could not write to Kafka due to
> SSLException.
> > 3. I restarted Kafka broker.
> > 4. The client could write messages.
> >
> >
> > server.properties
> >
> 
> >
> > # Server Basics #
> >
> > # The id of the broker. This must be set to a unique integer for each
> > broker.
> > broker.id=1
> > auto.create.topics.enable=true
> > delete.topic.enable=true
> >
> >  Upgrading from 1.1.0 to 2.2.0 
> > inter.broker.protocol.version=1.1
> > log.message.format.version=1.1
> >
> > # Socket Server Settings
> > #
> >
> > listeners=INTERNAL://1.1.1.65:9092,EXTERNAL://10.28.118.172:443
> > ,INTERNAL_PLAINTEXT://1.1.1.65:9094
> > advertised.listeners=INTERNAL://1.1.1.65:9092,EXTERNAL://
> 10.28.118.172:443
> > ,INTERNAL_PLAINTEXT://1.1.1.65:9094
> >
> listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL,INTERNAL_PLAINTEXT:PLAINTEXT
> > inter.broker.listener.name=INTERNAL_PLAINTEXT
> >
> > default.replication.factor=1
> > offsets.topic.replication.factor=1
> >
> > # Hostname the broker will bind to. If not set, the server will bind to
> all
> > interfaces
> > host.name=10.28.118.172
> >
> > # The number of threads handling network requests
> > num.network.threads=12
> >
> > # The number of threads doing disk I/O
> > num.io.threads=12
> >
> > # The send buffer (SO_SNDBUF) used by the socket server
> > socket.send.buffer.bytes=102400
> >
> > # The receive buffer (SO_RCVBUF) used by the socket server
> > socket.receive.buffer.bytes=102400
> >
> > # The maximum size of a request that the socket server will accept
> > (protection against OOM)
> > socket.request.max.bytes=104857600
> >
> > # Max message size is 10 MB
> > message.max.bytes=1120
> >
> > # Consumer side largest message size is 10 MB
> > fetch.message.max.bytes=1120
> >
> > # Replica max fetch size is 10MB
> > replica.fetch.max.bytes=1120
> >
> > # Max request size 10MB
> > max.request.size=1120
> >
> >  SHUTDOWN and REBALANCING ###
> > # Both of the following properties are enabled by default; they are also
> > # set explicitly here
> > controlled.shutdown.enable=true
> > auto.leader.rebalance.enable=true
> > unclean.leader.election.enable=true
> >
> >
> > # Security Settings ##
> > ssl.endpoint.identification.algorithm=""
> > ssl.keystore.location=/dir/keystore.jks
> > ssl.keystore.password=pwd
> > ssl.key.password=pwd
> > ssl.truststore.location=/dir/truststore.jks
> > ssl.truststore.password=pwd
> > ssl.keystore.type=JKS
> > ssl.truststore.type=JKS
> > security.protocol=SSL
> > ssl.client.auth=required
> > allow.everyone.if.no.acl.found=false
> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> > # User.ANONYMOUS is included for AMS to be able to program ACL via 9094
> port
> > super.users=User:CN=KafkaBroker1;User:ANONYMOUS
>
>


Re: Help - Updating SSL Truststore Dynamically - KAFKA-6810

2019-05-16 Thread Darshan
I edited the email subject since it was not correct. Thanks.

On Thu, May 16, 2019 at 2:08 PM Darshan  wrote:

> Hi
>
> I am testing out Kafka 2.2.0 and was hoping to test out "Enable dynamic
> reconfiguration of SSL truststores"
> https://issues.apache.org/jira/browse/KAFKA-6810. But unfortunately I
> could not get it to work. Please find the server.properties. Just wondering
> if we need a change of config. Please advise.
>
> 1. I added a new entry in the truststore, and validated that it is
> present.
> 2. The client (kafka writer) could not write to Kafka due to SSLException.
> 3. I restarted Kafka broker.
> 4. The client could write messages.
>
>
> server.properties
>
> 
>
> # Server Basics #
>
> # The id of the broker. This must be set to a unique integer for each
> broker.
> broker.id=1
> auto.create.topics.enable=true
> delete.topic.enable=true
>
>  Upgrading from 1.1.0 to 2.2.0 
> inter.broker.protocol.version=1.1
> log.message.format.version=1.1
>
> # Socket Server Settings
> #
>
> listeners=INTERNAL://1.1.1.65:9092,EXTERNAL://10.28.118.172:443
> ,INTERNAL_PLAINTEXT://1.1.1.65:9094
> advertised.listeners=INTERNAL://1.1.1.65:9092,EXTERNAL://10.28.118.172:443
> ,INTERNAL_PLAINTEXT://1.1.1.65:9094
>
> listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL,INTERNAL_PLAINTEXT:PLAINTEXT
> inter.broker.listener.name=INTERNAL_PLAINTEXT
>
> default.replication.factor=1
> offsets.topic.replication.factor=1
>
> # Hostname the broker will bind to. If not set, the server will bind to
> all interfaces
> host.name=10.28.118.172
>
> # The number of threads handling network requests
> num.network.threads=12
>
> # The number of threads doing disk I/O
> num.io.threads=12
>
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
>
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
>
> # The maximum size of a request that the socket server will accept
> (protection against OOM)
> socket.request.max.bytes=104857600
>
> # Max message size is 10 MB
> message.max.bytes=1120
>
> # Consumer side largest message size is 10 MB
> fetch.message.max.bytes=1120
>
> # Replica max fetch size is 10MB
> replica.fetch.max.bytes=1120
>
> # Max request size 10MB
> max.request.size=1120
>
>  SHUTDOWN and REBALANCING ###
> # Both of the following properties are enabled by default; they are also
> # set explicitly here
> controlled.shutdown.enable=true
> auto.leader.rebalance.enable=true
> unclean.leader.election.enable=true
>
>
> # Security Settings ##
> ssl.endpoint.identification.algorithm=""
> ssl.keystore.location=/dir/keystore.jks
> ssl.keystore.password=pwd
> ssl.key.password=pwd
> ssl.truststore.location=/dir/truststore.jks
> ssl.truststore.password=pwd
> ssl.keystore.type=JKS
> ssl.truststore.type=JKS
> security.protocol=SSL
> ssl.client.auth=required
> allow.everyone.if.no.acl.found=false
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> # User.ANONYMOUS is included for AMS to be able to program ACL via 9094
> port
> super.users=User:CN=KafkaBroker1;User:ANONYMOUS
>
>


Help - Updating Keystore Dynamically - KAFKA-6810

2019-05-16 Thread Darshan
Hi

I am testing out Kafka 2.2.0 and was hoping to test out "Enable dynamic
reconfiguration of SSL truststores"
https://issues.apache.org/jira/browse/KAFKA-6810. But unfortunately I could
not get it to work. Please find the server.properties. Just wondering if we
need a change of config. Please advise.

1. I added a new entry in the truststore, and validated that it is
present.
2. The client (kafka writer) could not write to Kafka due to SSLException.
3. I restarted Kafka broker.
4. The client could write messages.
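For anyone hitting the same sequence: in Kafka 2.x the broker does not watch the truststore file, so adding a certificate to the JKS alone (step 1) does not trigger a reload; the broker re-reads the file when the truststore config itself is altered, even to the same path. A hedged sketch, not runnable without the cluster (broker id, listener name, and the PLAINTEXT port are taken from the server.properties below; verify before use):

```shell
# Re-set the truststore path for the INTERNAL listener so broker 1
# re-reads the file; setting the same value is enough to trigger a reload:
bin/kafka-configs.sh --bootstrap-server 1.1.1.65:9094 \
  --entity-type brokers --entity-name 1 --alter \
  --add-config 'listener.name.internal.ssl.truststore.location=/dir/truststore.jks'
```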


server.properties


# Server Basics #

# The id of the broker. This must be set to a unique integer for each
broker.
broker.id=1
auto.create.topics.enable=true
delete.topic.enable=true

 Upgrading from 1.1.0 to 2.2.0 
inter.broker.protocol.version=1.1
log.message.format.version=1.1

# Socket Server Settings
#

listeners=INTERNAL://1.1.1.65:9092,EXTERNAL://10.28.118.172:443
,INTERNAL_PLAINTEXT://1.1.1.65:9094
advertised.listeners=INTERNAL://1.1.1.65:9092,EXTERNAL://10.28.118.172:443
,INTERNAL_PLAINTEXT://1.1.1.65:9094
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL,INTERNAL_PLAINTEXT:PLAINTEXT
inter.broker.listener.name=INTERNAL_PLAINTEXT

default.replication.factor=1
offsets.topic.replication.factor=1

# Hostname the broker will bind to. If not set, the server will bind to all
interfaces
host.name=10.28.118.172

# The number of threads handling network requests
num.network.threads=12

# The number of threads doing disk I/O
num.io.threads=12

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept
(protection against OOM)
socket.request.max.bytes=104857600

# Max message size is 10 MB
message.max.bytes=1120

# Consumer side largest message size is 10 MB
fetch.message.max.bytes=1120

# Replica max fetch size is 10MB
replica.fetch.max.bytes=1120

# Max request size 10MB
max.request.size=1120

 SHUTDOWN and REBALANCING ###
# Both of the following properties are enabled by default; they are also
# set explicitly here
controlled.shutdown.enable=true
auto.leader.rebalance.enable=true
unclean.leader.election.enable=true


# Security Settings ##
ssl.endpoint.identification.algorithm=""
ssl.keystore.location=/dir/keystore.jks
ssl.keystore.password=pwd
ssl.key.password=pwd
ssl.truststore.location=/dir/truststore.jks
ssl.truststore.password=pwd
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.protocol=SSL
ssl.client.auth=required
allow.everyone.if.no.acl.found=false
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# User.ANONYMOUS is included for AMS to be able to program ACL via 9094 port
super.users=User:CN=KafkaBroker1;User:ANONYMOUS


Help needed for Upgrade from 0.10.2 to 1.1

2018-05-09 Thread Darshan
Hi

We were on Kafka 0.10.2.1. While upgrading to 1.1, we bring down all 3
Kafka brokers and make the change in the config file shown below, which is
recommended in http://kafka.apache.org/11/documentation.html#upgrade, and
restart the brokers:

*inter.broker.protocol.version=1.1*
*log.message.format.version=0.10.2*

The symptoms we are seeing are as follows:

1. There seems to be a leader issue post upgrade. For example, if we
describe the topic, we see content like below, where Leader is -1 and Isr
is empty.

Topic: topic-5aeea219497d4f408504d1af   Partition: 0   Leader: -1   Replicas: 3,2   Isr:
Topic: topic-5aeea219497d4f408504d1af   Partition: 1   Leader: 1    Replicas: 1,3   Isr: 3,1
Topic: topic-5aeea219497d4f408504d1af   Partition: 2   Leader: -1   Replicas: 2,1   Isr:
Topic: topic-5aeea219497d4f408504d1af   Partition: 3   Leader: 3    Replicas: 3,1   Isr: 3,1
Topic: topic-5aeea219497d4f408504d1af   Partition: 4   Leader: -1   Replicas: 1,2   Isr:
Topic: topic-5aeea219497d4f408504d1af   Partition: 5   Leader: -1   Replicas: 2,3   Isr:
Topic: topic-5aeea219497d4f408504d1af   Partition: 6   Leader: -1   Replicas: 3,2   Isr:


2. We also saw ZooKeeper log messages like this:

2018-05-09_07:05:45.94181 2018-05-09 07:05:45,940 [myid:1] - WARN
[WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 3 at
election address kafka-3/1.1.1.144:3888
2018-05-09_07:05:45.94182 java.net.ConnectException: Connection refused


Does anyone know of any caveats or gotchas while upgrading the Kafka
version?
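One thing worth double-checking against the upgrade notes: they describe a rolling upgrade rather than a full-cluster shutdown, and they bump `inter.broker.protocol.version` only after all brokers run the new code, whereas the config above sets it to 1.1 up front. A sketch of the documented sequence (version numbers taken from this thread; confirm against the upgrade page for your release):

```shell
# 1. Before swapping binaries, pin the old versions on every broker:
#        inter.broker.protocol.version=0.10.2
#        log.message.format.version=0.10.2
# 2. Upgrade the code and restart brokers one at a time.
# 3. Once the whole cluster runs 1.1, set inter.broker.protocol.version=1.1
#    and perform a second rolling restart.
# 4. Finally raise log.message.format.version=1.1 and roll once more.
```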

Thanks.

--Darshan


Re: KIP-226 - Dynamic Broker Configuration

2018-04-19 Thread Darshan
Hi Rajini

1. Oh, so truststores can't be updated dynamically? Is this planned for
any future release?

2. By "dynamically updated", do you mean that if the broker was using
keystore A, we can now point it to use a different keystore B?

Thanks.



On Wed, Apr 18, 2018 at 10:51 PM, Darshan <purandare.dars...@gmail.com>
wrote:

> Hi
>
> KIP-226 was released in 1.1. I had a question about it.
>
> If we add a new certificate (programmatically) to the truststore that the
> Kafka broker is using, do we need to issue any CLI or other command for the
> Kafka broker to read the new certificate, or with KIP-226 does everything
> happen automatically?
>
> Thanks.
>
>
>


KIP-226 - Dynamic Broker Configuration

2018-04-18 Thread Darshan
Hi

KIP-226 was released in 1.1. I had a question about it.

If we add a new certificate (programmatically) to the truststore that the
Kafka broker is using, do we need to issue any CLI or other command for the
Kafka broker to read the new certificate, or with KIP-226 does everything
happen automatically?

Thanks.


Re: advertised.listeners

2018-04-05 Thread Darshan
Thanks, Manikumar, for pointing out the typo. After setting the ACL rules
as you described above, it worked fine. I was able to run secure (on the
external interface) and insecure (on the internal interface) modes together.

As a reference to others in the forum, in addition to the server.properties
that I have posted above, these are the two things I did:

1. Changed super.users to:

super.users=User:CN=Kafka1;User:ANONYMOUS

2. Used the ACL rule as told by Manikumar:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:ANONYMOUS --allow-host \* --operation Read \
  --topic test

Many thanks, Manikumar.

Thanks.


On Wed, Apr 4, 2018 at 7:25 PM, Manikumar <manikumar.re...@gmail.com> wrote:

> Hi,
>
> User name is ANONYMOUS, not CN=ANONYMOUS. You can enable authorizer logs
> (kafka-authorizer.log) and check for any deny errors.
> super.users can be configured with the same value across all brokers.
>
> sh kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181
> --add --allow-principal User:ANONYMOUS --allow-host \* --operation Read
> --topic test
>
> On Thu, Apr 5, 2018 at 2:39 AM, Darshan <purandare.dars...@gmail.com>
> wrote:
>
>> Hi Manikumar
>>
>> I pushed ACLs for User:ANONYMOUS and when I list them they are listed as
>> shown. Can you please suggest if server.properties needs a change ?
>>
>> *[alpha: user@Kafka1 kafka_2.12-0.10.2.1]$ bin/kafka-acls.sh
>> --authorizer-properties zookeeper.connect=Kafka1-1:2181 --list --topic
>> topic05*
>> *Current ACLs for resource `Topic:topic05`:*
>> *User:CN=ANONYMOUS has Allow permission for operations: Describe
>> from hosts: **
>> *User:CN=Producer05 has Allow permission for operations: Write
>> from hosts: **
>> *User:CN=Producer05 has Allow permission for operations: Describe
>> from hosts: **
>> *User:CN=ANONYMOUS has Allow permission for operations: Write
>> from hosts: **
>>
>> When I start sending messages using kafka console client, I see the
>> following error. topic05 is created with 16 partitions, but no directory is
>> created in logs directory. Can someone please help ?
>>
>> *[alpha: user@Kafka1 kafka_2.12-0.10.2.1]$ bin/kafka-console-producer.sh
>> --broker-list Kafka1:9092 --topic topic05*
>> *WorldHello*
>> *[2018-04-04 21:01:40,338] WARN Error while fetching metadata with
>> correlation id 1 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>> *[2018-04-04 21:01:40,437] WARN Error while fetching metadata with
>> correlation id 2 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>> *[2018-04-04 21:01:40,538] WARN Error while fetching metadata with
>> correlation id 3 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>> *[2018-04-04 21:01:40,640] WARN Error while fetching metadata with
>> correlation id 4 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>> *[2018-04-04 21:01:40,740] WARN Error while fetching metadata with
>> correlation id 5 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>> *[2018-04-04 21:01:40,841] WARN Error while fetching metadata with
>> correlation id 6 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>> *[2018-04-04 21:01:40,943] WARN Error while fetching metadata with
>> correlation id 7 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>> *[2018-04-04 21:01:41,045] WARN Error while fetching metadata with
>> correlation id 8 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
>> (org.apache.kafka.clients.NetworkClient)*
>>
>>
>>
>>
>>
>>
>>
>> *server.properties for our Kafka-1,2,3. They are identical
>> except broker.id and super.users properties.*
>>
>> # ID and basic topic creation
>> broker.id=1
>> auto.create.topics.enable=true
>> delete.topic.enable=true
>>
>> # LISTENER Settings
>> listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://172.21.190.176:9093
>> advertised.listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://172.21.190.176:9093
>> listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
>> inter.broker.listener.name=INTERNAL
>> host.name=172.21.190.176
>>
>> # Security Settings
>> ssl.keystore.location=keystore.jks
>> ssl.keystore.password=password

Re: advertised.listeners

2018-04-04 Thread Darshan
Hi Manikumar

I pushed ACLs for User:ANONYMOUS and when I list them they are listed as
shown. Can you please suggest if server.properties needs a change ?

*[alpha: user@Kafka1 kafka_2.12-0.10.2.1]$ bin/kafka-acls.sh
--authorizer-properties zookeeper.connect=Kafka1-1:2181 --list --topic
topic05*
*Current ACLs for resource `Topic:topic05`:*
*User:CN=ANONYMOUS has Allow permission for operations: Describe
from hosts: **
*User:CN=Producer05 has Allow permission for operations: Write from
hosts: **
*User:CN=Producer05 has Allow permission for operations: Describe
from hosts: **
*User:CN=ANONYMOUS has Allow permission for operations: Write from
hosts: **

When I start sending messages using the Kafka console client, I see the
following error. topic05 is created with 16 partitions, but no directory is
created in the logs directory. Can someone please help?

*[alpha: user@Kafka1 kafka_2.12-0.10.2.1]$ bin/kafka-console-producer.sh
--broker-list Kafka1:9092 --topic topic05*
*WorldHello*
*[2018-04-04 21:01:40,338] WARN Error while fetching metadata with
correlation id 1 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
*[2018-04-04 21:01:40,437] WARN Error while fetching metadata with
correlation id 2 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
*[2018-04-04 21:01:40,538] WARN Error while fetching metadata with
correlation id 3 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
*[2018-04-04 21:01:40,640] WARN Error while fetching metadata with
correlation id 4 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
*[2018-04-04 21:01:40,740] WARN Error while fetching metadata with
correlation id 5 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
*[2018-04-04 21:01:40,841] WARN Error while fetching metadata with
correlation id 6 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
*[2018-04-04 21:01:40,943] WARN Error while fetching metadata with
correlation id 7 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
*[2018-04-04 21:01:41,045] WARN Error while fetching metadata with
correlation id 8 : {topic05=UNKNOWN_TOPIC_OR_PARTITION}
(org.apache.kafka.clients.NetworkClient)*
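If the cause is the principal mismatch (PLAINTEXT connections authenticate as User:ANONYMOUS, without a CN= prefix), the User:CN=ANONYMOUS entries listed above would never match. A hedged sketch of replacing them, not runnable without the cluster (ZooKeeper address and topic reused from the commands above):

```shell
# Remove the non-matching principal and add the one PLAINTEXT clients use:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=Kafka1-1:2181 \
  --remove --force --allow-principal User:CN=ANONYMOUS \
  --operation Write --operation Describe --topic topic05
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=Kafka1-1:2181 \
  --add --allow-principal User:ANONYMOUS --allow-host '*' \
  --operation Write --operation Describe --topic topic05
```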







*server.properties for our Kafka-1,2,3. They are identical except broker.id
and super.users properties.*

# ID and basic topic creation
broker.id=1
auto.create.topics.enable=true
delete.topic.enable=true

# LISTENER Settings
listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://172.21.190.176:9093
advertised.listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://172.21.190.176:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
host.name=172.21.190.176

# Security Settings
ssl.keystore.location=keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.truststore.location=truststore.jks
ssl.truststore.password=password
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.protocol=SSL
ssl.client.auth=required
allow.everyone.if.no.acl.found=false
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:CN=Kafka1


On Tue, Apr 3, 2018 at 10:42 PM, Manikumar <manikumar.re...@gmail.com>
wrote:

> @Darshan,
> For PLAINTEXT channels, principal will be "ANONYMOUS". You need to give
> produce/consume permissions
>  to "User:ANONYMOUS"
>
>
> On Wed, Apr 4, 2018 at 8:10 AM, Joe Hammerman <
> jhammer...@squarespace.com.invalid> wrote:
>
> > Hi all,
> >
> > Is it possible to run mixed mode with PLAINTEXT and SSL with no SASL?
> What
> > should port and advertised listeners values in kafka-server.properties be
> > set in order to configure such an access profile? We want to so in order
> to
> > be able to perform healthchecking on the loopback device without having
> to
> > negotiate an SSL connection.
> >
> > Thanks in advance for any assistance anyone can provide,
> > Joseph Hammerman
> >
> > On Tue, Apr 3, 2018 at 8:06 PM, Martin Gainty <mgai...@hotmail.com>
> wrote:
> >
> > > (from MessageBroker consumer)
> > >
> > > >tracert MessageProducer
> > >
> > > if zk server is found in tracert
> > >
> > > then yes you have a MB quorom
> > >
> > > FWIR Mixing PLAINTEXT with SASL-SSL on ZK  is not supported
> > > https://stackoverflow.com/questions/46912937/is-it-
> > > possible-to-connect-zookeeper-and-kafka-via-sasl-kafka-
> broke

Re: advertised.listeners

2018-04-03 Thread Darshan
We are using 0.10.2.1 and ZK 3.4.9. Can something be derived from this
piece of info? Thanks.

On Tue, Apr 3, 2018 at 3:13 PM, Martin Gainty <mgai...@hotmail.com> wrote:

> tracert MessageBrokerIP
>
> do you see ZK server in the trace?
>
> if yes then you are running kafka-cluster
>
> (ZK does not support mixed mode but there is a backdoor
> zookeeper.properties config attribute that allows plaintext clients to
> bypass sasl auth)
>
> ?
>
> Martin
> __
>
>
>
> 
> From: Darshan <purandare.dars...@gmail.com>
> Sent: Tuesday, April 3, 2018 5:45 PM
> To: rajinisiva...@gmail.com
> Cc: users@kafka.apache.org
> Subject: Re: advertised.listeners
>
> Hi Rajini
>
> The above configuration that you mentioned a while back helped me sort the
> issue of listeners and I was also able to run Kafka 0.10.2.1 with SSL and
> ACLs as well from one of your other posts.
>
> I wanted to ask you if it is possible to run Kafka in a mixed security
> mode? i.e. external producers who are on 172.x.x.x interface can use the
> SSL to send/receive data from/to our Kafka brokers, but our internal
> consumer can read/write on PLAINTEXT channel to Kafka.
>
> Here is my server.properties, but my internal producer which does not have
> any keystore and truststore is getting the following error:
> *[2018-04-03 21:21:04,378] WARN Error while fetching metadata with
> correlation id 1 : {Topic4006=TOPIC_AUTHORIZATION_FAILED}
> (org.apache.kafka.clients.NetworkClient)*
>
>
> *server.properties for our Kafka-1,2,3. They are identical except
> broker.id and super.users properties.*
>
>
>
>
> # ID and basic topic creation
> broker.id=1
> auto.create.topics.enable=true
> delete.topic.enable=true
>
> # LISTENER Settings
> listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://172.21.190.176:9093
> advertised.listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://
> 172.21.190.176:9093
> listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
> inter.broker.listener.name=INTERNAL
> host.name=172.21.190.176
>
> # Security Settings
> ssl.keystore.location=keystore.jks
> ssl.keystore.password=password
> ssl.key.password=password
> ssl.truststore.location=truststore.jks
> ssl.truststore.password=password
> ssl.keystore.type=JKS
> ssl.truststore.type=JKS
> security.protocol=SSL
> ssl.client.auth=required
> allow.everyone.if.no.acl.found=false
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> super.users=User:CN=Kafka1
>
> Can you please point out if anything needs to be modified ?
>
> Many thanks.
>
> --Darshan
>
> On Wed, May 31, 2017 at 11:31 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
> > If you want to use different interfaces with the same security protocol,
> > you can specify listener names. You can then also configure different
> > security properties for internal/external if you need.
> >
> > listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
> >
> > advertised.listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
> >
> > listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
> >
> > inter.broker.listener.name=INTERNAL
> >
> > On Wed, May 31, 2017 at 6:22 PM, Raghav <raghavas...@gmail.com> wrote:
> >
> > > Hello Darshan
> > >
> > > Have you tried SSL://0.0.0.0:9093 ?
> > >
> > > Rajani had suggested something similar to me a week back while I was
> > > trying to get a ACL based setup.
> > >
> > > Thanks.
> > >
> > > On Wed, May 31, 2017 at 8:58 AM, Darshan <purandare.dars...@gmail.com>
> > > wrote:
> > >
> > >> Hi
> > >>
> > >> Our Kafka broker has two IPs on two different interfaces.
> > >>
> > >> eth0 has 172.x.x.x for external leg
> > >> eth1 has 1.x.x.x for internal leg
> > >>
> > >>
> > >> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on
> 1.x.x.x
> > >> subnet.
> > >>
> > >> If we use advertised.listeners=SSL://172.x.x.x:9093, then Producer
> can
> > >> producer the message, but Consumer cannot receive the message.
> > >>
> > >> What value should we use for advertised.listeners so that Producer can
> > >> write and Consumers can read ?
> > >>
> > >> Thanks.
> > >>
> > >
> > >
> > >
> > > --
> > > Raghav
> > >
> >
>


Re: Running SSL and PLAINTEXT mode together (Kafka 10.2.1)

2018-04-03 Thread Darshan
Hi Jaikiran

My producer is getting this error: *WARN Error while fetching metadata with
correlation id 1 : {Topic4006=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)*

To test it out my producer is the default Kafka console client which I am
trying to use like this: *bin/kafka-console-producer.sh --broker-list
Kafka1:9092 --topic Topic4006* and then I see the above mentioned error
when I type something to send a message.

Here is my server.properties file if that helps.

# ID and basic topic creation
broker.id=1
auto.create.topics.enable=true
delete.topic.enable=true

# LISTENER Settings
listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://172.21.190.176:9093
advertised.listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://17
2.21.190.176:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
host.name=172.21.190.176

# Security Settings
ssl.keystore.location=keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.truststore.location=truststore.jks
ssl.truststore.password=password
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.protocol=SSL
ssl.client.auth=required
allow.everyone.if.no.acl.found=false
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:CN=Kafka1
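Since `allow.everyone.if.no.acl.found=false` is set above, the internal PLAINTEXT producer is authenticated as User:ANONYMOUS and needs explicit ACLs before it can produce. A hedged sketch, not runnable without the cluster (the topic name comes from the error above; the ZooKeeper address is an assumption):

```shell
# Grant produce rights (Write/Describe/Create) on Topic4006 to the
# principal PLAINTEXT clients authenticate as:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:ANONYMOUS --allow-host '*' \
  --producer --topic Topic4006
```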

Thanks.

On Wed, Dec 20, 2017 at 8:16 PM, Jaikiran Pai <jai.forums2...@gmail.com>
wrote:

> When you say not able to write to a Kafka broker, do you mean your
> producer isn't able to produce a message? What does your producer configs
> look like? What exact exception, error or DEBUG logs do you see when you
> attempt this?
>
> We do use a similar setup, so I do know that such a configuration works
> fine.
>
> -Jaikiran
>
>
>
> On 21/12/17 1:49 AM, Darshan wrote:
>
>> Hi Jaikiran
>>
>> With that config, my internal Kafka client can't write to the Kafka
>> broker.
>> What I am looking for is that the internal client can write to a Kafka
>> topic without any truststore setup, while the external Kafka client MUST
>> have a certificate and truststore set up, and can read only if ACLs are
>> programmed for that topic.
>>
>> Any idea if such a thing exists ?
>>
>> Thanks.
>>
>>
>> On Tue, Dec 19, 2017 at 10:10 PM, Jaikiran Pai <jai.forums2...@gmail.com>
>> wrote:
>>
>> What exact issue are you running into with that config?
>>>
>>> -Jaikiran
>>>
>>>
>>>
>>> On 20/12/17 7:24 AM, Darshan wrote:
>>>
>>> Anyone ?
>>>>
>>>> On Mon, Dec 18, 2017 at 7:25 AM, Darshan <purandare.dars...@gmail.com>
>>>> wrote:
>>>>
>>>> Hi
>>>>
>>>>> I am wondering if there is a way to run the SSL and PLAINTEXT mode
>>>>> together ? I am running Kafka 10.2.1. We want our internal clients to
>>>>> use
>>>>> the PLAINTEXT mode to write to certain topics, but any external clients
>>>>> should use SSL to read messages on those topics. We also want to
>>>>> enforce
>>>>> ACLs.
>>>>>
>>>>> To try this out, I modified my server.properties as follows, but
>>>>> without
>>>>> any luck. Can someone please let me know if it needs any change ?
>>>>>
>>>>> listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
>>>>> advertised.listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://
>>>>> 172.1.1.157:9093
>>>>> listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
>>>>> inter.broker.listener.name=INTERNAL
>>>>>
>>>>> ssl.keystore.location=/opt/keystores/keystotr.jks
>>>>> ssl.keystore.password=ABCDEFGH
>>>>> ssl.key.password=ABCDEFGH
>>>>> ssl.truststore.location=/opt/keystores/truststore.jks
>>>>> ssl.truststore.password=ABCDEFGH
>>>>> ssl.keystore.type=JKS
>>>>> ssl.truststore.type=JKS
>>>>> security.protocol=SSL
>>>>> ssl.client.auth=required
>>>>> # allow.everyone.if.no.acl.found=false
>>>>> allow.everyone.if.no.acl.found=true
>>>>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>>>>> super.users=User:CN=KafkaBroker01
>>>>>
>>>>> Thanks.
>>>>>
>>>>> --Darshan
>>>>>
>>>>>
>>>>>
>


Re: advertised.listeners

2018-04-03 Thread Darshan
Hi Rajini

The configuration you mentioned a while back helped me sort out the
listeners issue, and following one of your other posts I was also able to
run Kafka 0.10.2.1 with SSL and ACLs.

I wanted to ask whether it is possible to run Kafka in a mixed security
mode, i.e. external producers on the 172.x.x.x interface use SSL to
send/receive data to/from our Kafka brokers, while our internal consumer
reads/writes over a PLAINTEXT channel.

Here is my server.properties. My internal producer, which has no keystore
or truststore, gets the following error:
*[2018-04-03 21:21:04,378] WARN Error while fetching metadata with
correlation id 1 : {Topic4006=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)*


*server.properties for our Kafka-1,2,3. They are identical except for the
broker.id and super.users properties.*

# ID and basic topic creation
broker.id=1
auto.create.topics.enable=true
delete.topic.enable=true

# LISTERN Settings
listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://172.21.190.176:9093
advertised.listeners=INTERNAL://1.1.1.165:9092,EXTERNAL://
172.21.190.176:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
host.name=172.21.190.176

# Security Settings
ssl.keystore.location=keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.truststore.location=truststore.jks
ssl.truststore.password=password
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.protocol=SSL
ssl.client.auth=required
allow.everyone.if.no.acl.found=false
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:CN=Kafka1

Can you please point out if anything needs to be modified ?

Many thanks.

--Darshan

On Wed, May 31, 2017 at 11:31 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:

> If you want to use different interfaces with the same security protocol,
> you can specify listener names. You can then also configure different
> security properties for internal/external if you need.
>
> listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
>
> advertised.listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
>
> listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
>
> inter.broker.listener.name=INTERNAL
>
> On Wed, May 31, 2017 at 6:22 PM, Raghav <raghavas...@gmail.com> wrote:
>
> > Hello Darshan
> >
> > Have you tried SSL://0.0.0.0:9093 ?
> >
> > Rajini had suggested something similar to me a week back while I was
> > trying to get an ACL-based setup.
> >
> > Thanks.
> >
> > On Wed, May 31, 2017 at 8:58 AM, Darshan <purandare.dars...@gmail.com>
> > wrote:
> >
> >> Hi
> >>
> >> Our Kafka broker has two IPs on two different interfaces.
> >>
> >> eth0 has 172.x.x.x for external leg
> >> eth1 has 1.x.x.x for internal leg
> >>
> >>
> >> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
> >> subnet.
> >>
> >> If we use advertised.listeners=SSL://172.x.x.x:9093, then the Producer can
> >> produce the message, but the Consumer cannot receive it.
> >>
> >> What value should we use for advertised.listeners so that Producer can
> >> write and Consumers can read ?
> >>
> >> Thanks.
> >>
> >
> >
> >
> > --
> > Raghav
> >
>
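
Regarding the TOPIC_AUTHORIZATION_FAILED error above: on a PLAINTEXT
listener there is no client certificate, so Kafka assigns the session the
principal User:ANONYMOUS; with allow.everyone.if.no.acl.found=false and no
ACL for that principal, every request is denied. A rough stdlib sketch of
that decision (function names are illustrative, not Kafka's actual API):

```python
# Illustrative model (not Kafka's real classes) of why a PLAINTEXT client
# hits TOPIC_AUTHORIZATION_FAILED when allow.everyone.if.no.acl.found=false.

def principal_for_listener(security_protocol, cert_dn=None):
    # SSL listeners derive the principal from the client cert's DN;
    # PLAINTEXT listeners have no cert, hence ANONYMOUS.
    if security_protocol == "SSL" and cert_dn:
        return "User:" + cert_dn
    return "User:ANONYMOUS"

def authorize(principal, topic, acls, super_users, allow_if_no_acl=False):
    if principal in super_users:
        return True
    topic_acls = acls.get(topic)
    if not topic_acls:
        return allow_if_no_acl      # deny by default when False
    return principal in topic_acls

super_users = {"User:CN=Kafka1"}
acls = {}                           # no ACLs programmed yet

internal = principal_for_listener("PLAINTEXT")
print(internal)                                             # User:ANONYMOUS
print(authorize(internal, "Topic4006", acls, super_users))  # False

# Granting an ACL to User:ANONYMOUS (or flipping
# allow.everyone.if.no.acl.found to true) unblocks the internal client:
acls = {"Topic4006": {"User:ANONYMOUS"}}
print(authorize(internal, "Topic4006", acls, super_users))  # True
```

So with the config above, either grant the internal topic's ACLs to
User:ANONYMOUS or accept allow.everyone.if.no.acl.found=true and lock down
the external topics explicitly.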


Re: Running SSL and PLAINTEXT mode together (Kafka 10.2.1)

2017-12-20 Thread Darshan
Hi Jaikiran

With that config, my internal kafka client can't write to the Kafka broker.
What I am looking for is that internal client can write to Kafka topic
without having to have any truststore setup, while external kafka client
MUST have certificate, and truststore setup and can read only if ACLs are
programmed for that topic.

Any idea if such a thing exists ?

Thanks.


On Tue, Dec 19, 2017 at 10:10 PM, Jaikiran Pai <jai.forums2...@gmail.com>
wrote:

> What exact issue are you running into with that config?
>
> -Jaikiran
>
>
>
> On 20/12/17 7:24 AM, Darshan wrote:
>
>> Anyone ?
>>
>> On Mon, Dec 18, 2017 at 7:25 AM, Darshan <purandare.dars...@gmail.com>
>> wrote:
>>
>> Hi
>>>
>>> I am wondering if there is a way to run the SSL and PLAINTEXT mode
>>> together ? I am running Kafka 10.2.1. We want our internal clients to use
>>> the PLAINTEXT mode to write to certain topics, but any external clients
>>> should use SSL to read messages on those topics. We also want to enforce
>>> ACLs.
>>>
>>> To try this out, I modified my server.properties as follows, but without
>>> any luck. Can someone please let me know if it needs any change ?
>>>
>>> listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
>>> advertised.listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://
>>> 172.1.1.157:9093
>>> listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
>>> inter.broker.listener.name=INTERNAL
>>>
>>> ssl.keystore.location=/opt/keystores/keystore.jks
>>> ssl.keystore.password=ABCDEFGH
>>> ssl.key.password=ABCDEFGH
>>> ssl.truststore.location=/opt/keystores/truststore.jks
>>> ssl.truststore.password=ABCDEFGH
>>> ssl.keystore.type=JKS
>>> ssl.truststore.type=JKS
>>> security.protocol=SSL
>>> ssl.client.auth=required
>>> # allow.everyone.if.no.acl.found=false
>>> allow.everyone.if.no.acl.found=true
>>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>>> super.users=User:CN=KafkaBroker01
>>>
>>> Thanks.
>>>
>>> --Darshan
>>>
>>>
>


Re: Running SSL and PLAINTEXT mode together (Kafka 10.2.1)

2017-12-19 Thread Darshan
Anyone ?

On Mon, Dec 18, 2017 at 7:25 AM, Darshan <purandare.dars...@gmail.com>
wrote:

> Hi
>
> I am wondering if there is a way to run the SSL and PLAINTEXT mode
> together ? I am running Kafka 10.2.1. We want our internal clients to use
> the PLAINTEXT mode to write to certain topics, but any external clients
> should use SSL to read messages on those topics. We also want to enforce
> ACLs.
>
> To try this out, I modified my server.properties as follows, but without
> any luck. Can someone please let me know if it needs any change ?
>
> listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
> advertised.listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://
> 172.1.1.157:9093
> listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
> inter.broker.listener.name=INTERNAL
>
> ssl.keystore.location=/opt/keystores/keystore.jks
> ssl.keystore.password=ABCDEFGH
> ssl.key.password=ABCDEFGH
> ssl.truststore.location=/opt/keystores/truststore.jks
> ssl.truststore.password=ABCDEFGH
> ssl.keystore.type=JKS
> ssl.truststore.type=JKS
> security.protocol=SSL
> ssl.client.auth=required
> # allow.everyone.if.no.acl.found=false
> allow.everyone.if.no.acl.found=true
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> super.users=User:CN=KafkaBroker01
>
> Thanks.
>
> --Darshan
>


Running SSL and PLAINTEXT mode together (Kafka 10.2.1)

2017-12-18 Thread Darshan
Hi

I am wondering if there is a way to run the SSL and PLAINTEXT mode together
? I am running Kafka 10.2.1. We want our internal clients to use the
PLAINTEXT mode to write to certain topics, but any external clients should
use SSL to read messages on those topics. We also want to enforce ACLs.

To try this out, I modified my server.properties as follows, but without
any luck. Can someone please let me know if it needs any change ?

listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
advertised.listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL

ssl.keystore.location=/opt/keystores/keystore.jks
ssl.keystore.password=ABCDEFGH
ssl.key.password=ABCDEFGH
ssl.truststore.location=/opt/keystores/truststore.jks
ssl.truststore.password=ABCDEFGH
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.protocol=SSL
ssl.client.auth=required
# allow.everyone.if.no.acl.found=false
allow.everyone.if.no.acl.found=true
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:CN=KafkaBroker01

Thanks.

--Darshan
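
The mixed-mode config above hinges on three settings staying consistent:
every name in listeners must appear in listener.security.protocol.map, and
inter.broker.listener.name must be one of the declared listeners. (Also
note that security.protocol is a client property; placing it in the
broker's server.properties has no effect.) A quick stdlib sketch of that
consistency check, using the values from the mail:

```python
# Consistency check for the mixed-mode listener settings above: every
# listener name needs a protocol mapping, and the inter-broker listener
# must be one of the declared listeners.  Values copied from the mail.

conf = {
    "listeners": "INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093",
    "listener.security.protocol.map": "INTERNAL:PLAINTEXT,EXTERNAL:SSL",
    "inter.broker.listener.name": "INTERNAL",
}

listeners = {}
for entry in conf["listeners"].split(","):
    name, addr = entry.split("://", 1)
    listeners[name] = addr

proto_map = dict(pair.split(":", 1)
                 for pair in conf["listener.security.protocol.map"].split(","))

assert set(listeners) == set(proto_map), "every listener needs a protocol"
assert conf["inter.broker.listener.name"] in listeners
print(proto_map)  # {'INTERNAL': 'PLAINTEXT', 'EXTERNAL': 'SSL'}
```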


Re: advertised.listeners

2017-05-31 Thread Darshan
Thanks, Rajini and Raghav. Let me try this. This is helpful.

On Wed, May 31, 2017 at 11:31 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:

> If you want to use different interfaces with the same security protocol,
> you can specify listener names. You can then also configure different
> security properties for internal/external if you need.
>
> listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
>
> advertised.listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
>
> listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
>
> inter.broker.listener.name=INTERNAL
>
> On Wed, May 31, 2017 at 6:22 PM, Raghav <raghavas...@gmail.com> wrote:
>
> > Hello Darshan
> >
> > Have you tried SSL://0.0.0.0:9093 ?
> >
> > Rajini had suggested something similar to me a week back while I was
> > trying to get an ACL-based setup.
> >
> > Thanks.
> >
> > On Wed, May 31, 2017 at 8:58 AM, Darshan <purandare.dars...@gmail.com>
> > wrote:
> >
> >> Hi
> >>
> >> Our Kafka broker has two IPs on two different interfaces.
> >>
> >> eth0 has 172.x.x.x for external leg
> >> eth1 has 1.x.x.x for internal leg
> >>
> >>
> >> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
> >> subnet.
> >>
> >> If we use advertised.listeners=SSL://172.x.x.x:9093, then the Producer can
> >> produce the message, but the Consumer cannot receive it.
> >>
> >> What value should we use for advertised.listeners so that Producer can
> >> write and Consumers can read ?
> >>
> >> Thanks.
> >>
> >
> >
> >
> > --
> > Raghav
> >
>


advertised.listeners

2017-05-31 Thread Darshan
Hi

Our Kafka broker has two IPs on two different interfaces.

eth0 has 172.x.x.x for external leg
eth1 has 1.x.x.x for internal leg


Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
subnet.

If we use advertised.listeners=SSL://172.x.x.x:9093, then the Producer can
produce the message, but the Consumer cannot receive it.

What value should we use for advertised.listeners so that Producer can
write and Consumers can read ?

Thanks.
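
The behavior described above follows from Kafka's metadata redirect: a
client uses the bootstrap address only for its first connection, then
reconnects to whatever address the broker advertises for the listener it
came in on. A toy model of why a single advertised SSL listener on 172.x
strands the 1.x consumer (addresses below are made up):

```python
# Toy model of Kafka's metadata redirect for a dual-homed broker.

advertised = {
    "INTERNAL": "1.0.0.5:9092",     # reachable from the 1.x consumer
    "EXTERNAL": "172.16.0.5:9093",  # reachable from the 172.x producer
}

def metadata_address(listener_name):
    # The broker returns the advertised address of the listener the
    # client connected through -- not the address the client dialed.
    return advertised[listener_name]

# With only advertised.listeners=SSL://172.x.x.x:9093 there is a single
# entry, so the 1.x consumer is redirected to an unreachable address;
# with one listener per subnet, each client gets an address it can route.
print(metadata_address("INTERNAL"))  # 1.0.0.5:9092
print(metadata_address("EXTERNAL"))  # 172.16.0.5:9093
```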


Re: Kafka Authorization and ACLs Broken

2017-05-23 Thread Darshan Purandare
Raghav

I saw few posts of yours around Kafka ACLs and the problems. I have seen
similar issues where Writer has not been able to write to any topic. I have
seen "leader not available" and sometimes "unknown topic or partition", and
"topic_authorization_failed" error.

Let me know if you find a valid config that works.

Thanks.



On Tue, May 23, 2017 at 8:44 AM, Raghav  wrote:

> Hello Kafka Users
>
> I am a new Kafka user and trying to make Kafka SSL work with Authorization
> and ACLs. I followed posts from Kafka and Confluent docs exactly to the
> point, but my producer cannot write to the Kafka broker. I get
> "LEADER_NOT_FOUND" errors. And even Consumer throws the same errors.
>
> Can someone please share their config which worked with ACLs.
>
> Here is my config. Please help.
>
> server.properties config
> 
> 
> broker.id=0
> auto.create.topics.enable=true
> delete.topic.enable=true
>
> listeners=PLAINTEXT://kafka1.example.com:9092,SSL://kafka1.example.com:9093
> host.name=kafka1.example.com
>
>
> ssl.keystore.location=/var/private/kafka1.keystore.jks
> ssl.keystore.password=12345678
> ssl.key.password=12345678
>
> ssl.truststore.location=/var/private/kafka1.truststore.jks
> ssl.truststore.password=12345678
>
> ssl.client.auth=required
> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
> ssl.keystore.type=JKS
> ssl.truststore.type=JKS
>
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> 
> 
>
>
>
> Here is producer Config(producer.properties)
> 
> 
> security.protocol=SSL
> ssl.truststore.location=/var/private/kafka2.truststore.jks
> ssl.truststore.password=12345678
>
> ssl.keystore.location=/var/private/kafka2.keystore.jks
> ssl.keystore.password=12345678
> ssl.key.password=12345678
>
> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
> ssl.truststore.type=JKS
> ssl.keystore.type=JKS
>
> 
> 
>
>
> Raghav
>
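
Two things commonly break this exact setup. LEADER_NOT_AVAILABLE after
enabling the authorizer usually means the brokers' own principals are not
authorized for inter-broker requests (add each broker's certificate DN to
super.users). And the default SSL principal is the certificate's full
X.500 subject DN, not just the CN, so a super.users entry must match it
exactly. A stdlib sketch of that comparison (DN values are hypothetical):

```python
# Kafka's default SSL principal is the client certificate's *full* X.500
# subject DN, so super.users=User:CN=Kafka1 only matches a certificate
# whose entire subject is "CN=Kafka1".  DN values here are hypothetical.

def ssl_principal(subject_dn):
    # Mirrors the default principal builder: "User:" + full DN string.
    return "User:" + subject_dn

super_users = {"User:CN=Kafka1"}

print(ssl_principal("CN=Kafka1") in super_users)                  # True
print(ssl_principal("CN=Kafka1,OU=Eng,O=Example") in super_users) # False
```

If the certificates carry extra RDNs, either list the full DN in
super.users or configure a principal-mapping rule so the broker extracts
just the CN.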