Re: Kafka ACL's with SSL Protocol is not working

2016-12-16 Thread Derar Alassi
Create a proper JKS keystore containing a certificate issued by a CA that
the Kafka brokers trust; the principal Kafka sees will then be the DN from
your client cert. Spend more time on getting this done correctly and things
will work fine.
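For reference, the keystore setup described above might be sketched as follows. All file names, aliases, and passwords here are placeholders, and ca-cert/ca-key stand for whatever CA the brokers' truststore already contains:

```shell
# 1) Generate the client key pair; the -dname becomes the principal Kafka sees.
keytool -genkeypair -keystore client.keystore.jks -alias client \
  -keyalg RSA -validity 365 -storepass changeit -keypass changeit \
  -dname "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"

# 2) Create a CSR and sign it with the CA the brokers trust.
keytool -certreq -keystore client.keystore.jks -alias client \
  -storepass changeit -file client.csr
openssl x509 -req -CA ca-cert -CAkey ca-key -CAcreateserial \
  -in client.csr -out client-signed.crt -days 365

# 3) Import the CA cert, then the signed client cert, back into the keystore.
keytool -importcert -keystore client.keystore.jks -alias CARoot \
  -storepass changeit -noprompt -file ca-cert
keytool -importcert -keystore client.keystore.jks -alias client \
  -storepass changeit -noprompt -file client-signed.crt
```

Note that if the broker is not configured to request client certificates (ssl.client.auth), the client is not authenticated at all and shows up as User:ANONYMOUS regardless of the keystore contents.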

On Thu, Dec 15, 2016 at 9:11 PM, Gerard Klijs <ger...@openweb.nl> wrote:

> Most likely something went wrong creating the keystores, causing the SSL
> handshake to fail. It's important to have a valid chain from the
> certificate in the truststore, possibly via intermediates, to the
> keystore.
>
> On Fri, Dec 16, 2016, 00:32 Raghu B <raghu98...@gmail.com> wrote:
>
> Thanks Derar & Kiran, your suggestions are very useful.
>
> I enabled Log4j debug mode and found that my client is connecting to the
> Kafka server as User:ANONYMOUS, which is really strange.
>
>
> I added a new super user named User:ANONYMOUS, and then I was able to
> send and receive messages without any issues.
>
> And now the question is: how can I change my principal from ANONYMOUS to
> something like
> User:"CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown",
> which comes from the SSL cert/keystore.
>
> Please help me with your inputs.
>
> Thanks in Advance,
> Raghu
>
> On Thu, Dec 15, 2016 at 5:29 AM, kiran kumar <kiran.cse...@gmail.com>
> wrote:
>
> > I have just noticed that I am using a user that is not configured in
> > the Kafka server JAAS config file.
> >
> >
> >
> > On Thu, Dec 15, 2016 at 6:38 PM, kiran kumar <kiran.cse...@gmail.com>
> > wrote:
> >
> > > Hi Raghu,
> > >
> > > I am also facing the same issue, but with the SASL_PLAINTEXT protocol.
> > >
> > > After enabling debugging I see that authentication completes, but I
> > > don't see any debug logs being generated for the authorization part (I
> > > might be missing something).
> > >
> > > You can also set the log level to debug in the properties and see
> > > what's going on.
> > >
> > > Thanks,
> > > Kiran
> > >
> > > On Thu, Dec 15, 2016 at 7:09 AM, Derar Alassi <derar.ala...@gmail.com>
> > > wrote:
> > >
> > >> Make sure that the principal ID is exactly what Kafka sees. Guessing
> > >> what the principal ID is by using keytool or openssl is not going to
> > >> help, from my experience. The best approach is to add some logging to
> > >> output the SSL client ID in
> > >> org.apache.kafka.common.network.SslTransportLayer.peerPrincipal().
> > >> The p.getName() is what you are looking for.
> > >>
> > >> Instead of adding it to the super user list in your server props file,
> > >> add ACLs for that user using kafka-acls.sh in the bin directory.
> > >>
> > >>
> > >>
> > >> On Wed, Dec 14, 2016 at 3:57 PM, Raghu B <raghu98...@gmail.com>
> wrote:
> > >>
> > >> > Thanks Shrikant for your reply, but I did the consumer part as
> > >> > well. Moreover, I am not facing this issue only with the consumer;
> > >> > I am getting these errors with the producer as well as the consumer.
> > >> >
> > >> > On Wed, Dec 14, 2016 at 3:53 PM, Shrikant Patel <spa...@pdxinc.com>
> > >> wrote:
> > >> >
> > >> > > You need to execute kafka-acls.sh with --consumer to enable
> > >> > > consumption from Kafka.
> > >> > >
> > >> > > _
> > >> > > Shrikant Patel  |  817.367.4302 <(817)%20367-4302>
> > >> > > Enterprise Architecture Team
> > >> > > PDX-NHIN
> > >> > >
> > >> > > -----Original Message-----
> > >> > > From: Raghu B [mailto:raghu98...@gmail.com]
> > >> > > Sent: Wednesday, December 14, 2016 5:42 PM
> > >> > > To: secur...@kafka.apache.org
> > >> > > Subject: Kafka ACL's with SSL Protocol is not working
> > >> > >
> > >> > > Hi All,
> > >> > >
> > >> > > I am trying to enable ACLs in my Kafka cluster along with the
> > >> > > SSL protocol.
> > >> > >
> > >> > > I tried each and every parameter but no luck, so I need help to
> > >> > > enable SSL (without Kerberos); I am attaching all the
> > >> > > configuration details in this.

Re: Kafka ACL's with SSL Protocol is not working

2016-12-14 Thread Derar Alassi
Make sure that the principal ID is exactly what Kafka sees. Guessing what
the principal ID is by using keytool or openssl is not going to help, from
my experience. The best approach is to add some logging to output the SSL
client ID in org.apache.kafka.common.network.SslTransportLayer.peerPrincipal().
The p.getName() is what you are looking for.

Instead of adding it to the super user list in your server props file, add
ACLs for that user using kafka-acls.sh in the bin directory.
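As a sketch of that suggestion, granting the client-certificate DN explicit ACLs could look like the following; the ZooKeeper address, topic, group, and DN are placeholders taken from this thread:

```shell
# Grant produce rights to the exact DN Kafka reports (quote the whole principal,
# since the DN contains commas and the shell would otherwise split it).
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal \
  "User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
  --producer --topic ssltopic

# Grant consume rights; --consumer also covers the consumer group resource.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal \
  "User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
  --consumer --topic ssltopic --group sslgroup
```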



On Wed, Dec 14, 2016 at 3:57 PM, Raghu B  wrote:

> Thanks Shrikant for your reply, but I did the consumer part as well.
> Moreover, I am not facing this issue only with the consumer; I am getting
> these errors with the producer as well as the consumer.
>
> On Wed, Dec 14, 2016 at 3:53 PM, Shrikant Patel  wrote:
>
> > You need to execute kafka-acls.sh with --consumer to enable consumption
> > from Kafka.
> >
> > _
> > Shrikant Patel  |  817.367.4302
> > Enterprise Architecture Team
> > PDX-NHIN
> >
> > -----Original Message-----
> > From: Raghu B [mailto:raghu98...@gmail.com]
> > Sent: Wednesday, December 14, 2016 5:42 PM
> > To: secur...@kafka.apache.org
> > Subject: Kafka ACL's with SSL Protocol is not working
> >
> > Hi All,
> >
> > I am trying to enable ACLs in my Kafka cluster along with the SSL
> > protocol.
> >
> > I tried each and every parameter but no luck, so I need help to enable
> > SSL (without Kerberos); I am attaching all the configuration details in
> > this.
> >
> > Kindly Help me.
> >
> >
> > I tested SSL without ACLs; it worked fine
> > (listeners=SSL://10.247.195.122:9093)
> >
> >
> > This is my Kafka server properties file:
> >
> > # ACL SETTINGS #
> >
> > auto.create.topics.enable=true
> >
> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> >
> > security.inter.broker.protocol=SSL
> >
> > #allow.everyone.if.no.acl.found=true
> >
> > #principal.builder.class=CustomizedPrincipalBuilderClass
> >
> > #super.users=User:"CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
> >
> > #super.users=User:Raghu;User:Admin
> >
> > #offsets.storage=kafka
> >
> > #dual.commit.enabled=true
> >
> > listeners=SSL://10.247.195.122:9093
> >
> > #listeners=PLAINTEXT://10.247.195.122:9092
> >
> > #listeners=PLAINTEXT://10.247.195.122:9092,SSL://10.247.195.122:9093
> >
> > #advertised.listeners=PLAINTEXT://10.247.195.122:9092
> >
> > ssl.keystore.location=/home/raghu/kafka/security/server.keystore.jks
> >
> > ssl.keystore.password=123456
> >
> > ssl.key.password=123456
> >
> > ssl.truststore.location=/home/raghu/kafka/security/server.truststore.jks
> >
> > ssl.truststore.password=123456
> >
> >
> >
> > Set the ACL from the authorizer CLI:
> >
> > bin/kafka-acls.sh --authorizer-properties
> > zookeeper.connect=10.247.195.122:2181 --list --topic ssltopic
> >
> > Current ACLs for resource `Topic:ssltopic`:
> >
> >   User:CN=writeuser, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown,
> > C=Unknown has Allow permission for operations: Write from hosts: *
> >
> >
> > XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ bin/kafka-console-producer.sh
> > --broker-list 10.247.195.122:9093 --topic ssltopic
> > --producer.config client-ssl.properties
> >
> > [2016-12-13 14:53:45,839] WARN Error while fetching metadata with
> > correlation id 0 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> > (org.apache.kafka.clients.NetworkClient)
> >
> > [2016-12-13 14:53:45,984] WARN Error while fetching metadata with
> > correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> > (org.apache.kafka.clients.NetworkClient)
> >
> >
> > XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ cat client-ssl.properties
> >
> > #group.id=sslgroup
> >
> > security.protocol=SSL
> >
> > ssl.truststore.location=/Users/rbaddam/Desktop/Dev/kafka_2.11-0.10.1.0/ssl/client.truststore.jks
> >
> > ssl.truststore.password=123456
> >
> > #Configure below if you use client auth
> >
> > ssl.keystore.location=/Users/rbaddam/Desktop/Dev/kafka_2.11-0.10.1.0/ssl/client.keystore.jks
> >
> > ssl.keystore.password=123456
> >
> > ssl.key.password=123456
> >
> >
> > XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ bin/kafka-console-consumer.sh
> > --bootstrap-server 10.247.195.122:9093 --new-consumer
> > --consumer.config client-ssl.properties --topic ssltopic --from-beginning
> >
> > [2016-12-13 14:53:28,817] WARN Error while fetching metadata with
> > correlation id 1 :

Re: Authorization with Topic Wildcards

2016-09-06 Thread Derar Alassi
Definitely worth putting it there. I will find some time soon to do it.
This is the least I can do!

Thanks guys for the quick feedback.

Derar

On Mon, Sep 5, 2016 at 1:43 PM, Ismael Juma <ism...@juma.me.uk> wrote:

> Hi Derar,
>
> The support for wildcards is limited to `*` at this point. Sorry for the
> confusion. If you're interested in submitting a PR to clarify the
> documentation, that would be great. :)
>
> Ismael
>
> On Mon, Sep 5, 2016 at 7:38 PM, Derar Alassi <derar.ala...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > Although the documentation mentions that one can use wildcards with topic
> > ACLs, I couldn't get that to work. Essentially, I want to set an Allow
> > Read/Write ACL on topics com.domain.xyz.* to a certain user. This would
> > give this user Read/Write access to topics com.domain.xyz.abc and
> > com.domain.xyz.def .
> >
> > I set an ACL using this command:
> > ./kafka-acls.sh --authorizer-properties zookeeper.connect=<zk connect string>
> > --add --allow-principal User:"user01" --topic com.domain.xyz.* --group
> > group01 --operation read
> >
> > When I try to consume from the topic com.domain.xyz.abc  using the same
> > user ID and group, I get NOT_AUTHORIZED error.
> >
> > Anything I am missing?
> >
> > Thanks,
> > Derar
> >
>


Re: Authorization with Topic Wildcards

2016-09-05 Thread Derar Alassi
Yes, I am running it from the command line. ZooKeeper has com.domain.xyz.*
under the /kafka-acl node, so it looks like it's being added correctly. I
actually allowed some time for ACL propagation to the Kafka brokers.
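Tom's point about shell globbing can be seen without Kafka at all: if a file in the current directory happens to match the pattern, bash rewrites an unquoted argument before kafka-acls.sh ever sees it. The file name below is illustrative:

```shell
cd "$(mktemp -d)"
touch com.domain.xyz.abc          # a file that matches the unquoted pattern
echo --topic com.domain.xyz.*     # bash expands the glob: --topic com.domain.xyz.abc
echo --topic 'com.domain.xyz.*'   # quoted: the literal pattern reaches the tool
```

(If nothing matches, bash passes the pattern through unchanged, which is why the problem only bites intermittently.)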



On Mon, Sep 5, 2016 at 11:42 AM, Tom Crayford <tcrayf...@heroku.com> wrote:

> if you're running that at a bash or similar shell, you need to quote the
> "*" so that bash doesn't expand it as a glob:
>
> ./kafka-acls.sh --authorizer-properties zookeeper.connect=<zk connect string>
> --add --allow-principal User:"user01" --topic 'com.domain.xyz.*' --group
> group01 --operation read
>
> It may be instructive to look at what data is in zookeeper for the acls to
> debug this.
>
> On Mon, Sep 5, 2016 at 7:38 PM, Derar Alassi <derar.ala...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > Although the documentation mentions that one can use wildcards with topic
> > ACLs, I couldn't get that to work. Essentially, I want to set an Allow
> > Read/Write ACL on topics com.domain.xyz.* to a certain user. This would
> > give this user Read/Write access to topics com.domain.xyz.abc and
> > com.domain.xyz.def .
> >
> > I set an ACL using this command:
> > ./kafka-acls.sh --authorizer-properties zookeeper.connect=<zk connect string>
> > --add --allow-principal User:"user01" --topic com.domain.xyz.* --group
> > group01 --operation read
> >
> > When I try to consume from the topic com.domain.xyz.abc  using the same
> > user ID and group, I get NOT_AUTHORIZED error.
> >
> > Anything I am missing?
> >
> > Thanks,
> > Derar
> >
>


Authorization with Topic Wildcards

2016-09-05 Thread Derar Alassi
Hi all,

Although the documentation mentions that one can use wildcards with topic
ACLs, I couldn't get that to work. Essentially, I want to set an Allow
Read/Write ACL on topics com.domain.xyz.* to a certain user. This would
give this user Read/Write access to topics com.domain.xyz.abc and
com.domain.xyz.def .

I set an ACL using this command:
./kafka-acls.sh --authorizer-properties zookeeper.connect=<zk connect string>
--add --allow-principal User:"user01" --topic com.domain.xyz.* --group
group01 --operation read

When I try to consume from the topic com.domain.xyz.abc  using the same
user ID and group, I get NOT_AUTHORIZED error.

Anything I am missing?

Thanks,
Derar


Re: Kafka ACLs CLI Auth Error

2016-08-11 Thread Derar Alassi
Just for the record: the Kafka/ZK clusters were in a bad state, which
caused this issue. I nuked the data dirs for both ZK and Kafka and things
now work fine. Unfortunately, I couldn't reproduce the error.
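For anyone hitting the same NoAuth/LoginException errors: the Client section of the JAAS file is what the CLI uses to authenticate to ZooKeeper. A typical keytab-based section, restating the suggestion in this thread (keytab path and principal are placeholders), looks roughly like:

```
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    serviceName="zookeeper"
    principal="kafka/hostname.abc@abc.com";
};
```

Mixing useTicketCache=true with storeKey=true, as in the original config, is what produces the "No key to store" login failure.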

On Mon, Aug 8, 2016 at 5:10 PM, BigData dev <bigdatadev...@gmail.com> wrote:

> Hi,
> I think the JAAS config file needs to be changed.
>
> Client {
>com.sun.security.auth.module.Krb5LoginModule required
>useKeyTab=true
>keyTab="/etc/security/keytabs/kafka.service.keytab"
>storeKey=true
>useTicketCache=false
>serviceName="zookeeper"
>principal="kafka/hostname.abc@abc.com";
> };
>
>
> You can follow the blog which provides complete steps for Kafka ACLS
>
> https://developer.ibm.com/hadoop/2016/07/20/kafka-acls/
>
>
>
> Thanks,
>
> Bharat
>
>
>
>
> On Mon, Aug 8, 2016 at 2:08 PM, Derar Alassi <derar.ala...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > I have  3-node ZK and Kafka clusters. I have secured ZK with SASL. I got
> > the keytabs done for my brokers and they can connect to the ZK ensemble
> > just fine with no issues. All gravy!
> >
> > Now, I am trying to set ACLs using the kafka-acls.sh CLI. Before that, I
> > did export the KAFKA_OPTS using the following command:
> >
> >
> >  export KAFKA_OPTS="-Djava.security.auth.login.config=<path>/kafka_server_jaas.conf
> > -Djavax.net.debug=all -Dsun.security.krb5.debug=true
> > -Djava.security.krb5.conf=<path>/krb5.conf"
> >
> > I enabled extra debugging too. The JAAS file has the following info:
> >
> > KafkaServer {
> > com.sun.security.auth.module.Krb5LoginModule required
> > useKeyTab=true
> > storeKey=true
> > keyTab="/etc/+kafka.keytab"
> > principal="kafka/@MY_DOMAIN";
> > };
> > Client {
> > com.sun.security.auth.module.Krb5LoginModule required
> > useKeyTab=true
> > useTicketCache=true
> > storeKey=true
> > keyTab="/etc/+kafka.keytab"
> > principal="kafka/@MY_DOMAIN";
> > };
> >
> > Note that I enabled useTicketCache in the client section.
> >
> > I know that my krb5.conf file is good since the brokers are healthy and
> > consumer/producers are able to do their work.
> >
> > Two scenarios:
> >
> > 1. When I enabled the useTicketCache=true, I get the following error:
> >
> > Aug 08, 2016 8:42:46 PM org.apache.zookeeper.ClientCnxn$SendThread startConnect
> > WARNING: SASL configuration failed:
> > javax.security.auth.login.LoginException: No key to store. Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper
> > server allows it.
> >
> > I execute "kinit kafka/@ -k -t
> > /etc/+kafka.keytab " on the same shell where I run the .sh CLI
> > tool.
> > 2. When I remove useTicketCache, I get the following error:
> >
> > Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ZooKeeper close
> > INFO: Session: 0x356621f18f70009 closed
> > Error while executing ACL command:
> > org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> > NoAuth for /kafka-acl/Topic
> > Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ClientCnxn$EventThread run
> > INFO: EventThread shut down
> > org.I0Itec.zkclient.exception.ZkException:
> > org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> > NoAuth for /kafka-acl/Topic
> > at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
> >
> >
> > Here is the command I run to set the ACLs in all cases:
> > ./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<host>:2181
> > --add --allow-principal User:Bob --producer --topic ssl-topic
> >
> >
> > I use Kafka 0.9.0.1. Note that I am using the same keytabs that my
> Brokers
> > (Kafka services) are using.
> >
> >
> > Any ideas what I am doing wrong or what I should do differently to get
> ACLs
> > set?
> >
> > Thanks,
> > Derar
> >
>


Re: How to Identify Consumers of a Topic?

2016-08-08 Thread Derar Alassi
I use kafka-consumer-offset-checker.sh to check the offsets of consumers,
and along with that you get which consumer is attached to each partition.
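A sketch of that check (host and group name are placeholders; note this tool covers the old ZooKeeper-based consumers and was deprecated in favor of kafka-consumer-groups.sh in later releases):

```shell
# Prints one row per partition of the group's topics: current offset,
# log-end offset, lag, and the owner (consumer instance) holding the partition.
bin/kafka-consumer-offset-checker.sh \
  --zookeeper localhost:2181 \
  --group mygroup
```

The Owner column is the part that identifies which consumer is attached to each partition.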

On Mon, Aug 8, 2016 at 3:12 PM, Jillian Cocklin <
jillian.cock...@danalinc.com> wrote:

> Hello,
>
> Our team is using Kafka for the first time and are in the testing phase of
> getting a new product ready, which uses Kafka as the communications
> backbone.  Basically, a processing unit will consume a message from a
> topic, do the processing, then produce the output to another topic.
> Messages get passed back and forth between processors until done.
>
> We had an issue last week where an outdated processor was "stealing"
> messages from a topic, doing incorrect (outdated) processing, and putting
> it in the next topic.  We could not find the rogue processor (aka
> consumer).  We shut down all known consumers of that topic, and it was
> still happening.  We finally gave up and renamed the topic to get around
> the issue.
>
> Is there a Kafka tool we could have used to find the connected consumer in
> that consumer group?  Maybe by name or by IP?
>
> Thanks,
> Jillian
>
>


Kafka ACLs CLI Auth Error

2016-08-08 Thread Derar Alassi
Hi all,

I have 3-node ZK and Kafka clusters. I have secured ZK with SASL. I got
the keytabs done for my brokers, and they can connect to the ZK ensemble
just fine with no issues. All gravy!

Now, I am trying to set ACLs using the kafka-acls.sh CLI. Before that, I
did export the KAFKA_OPTS using the following command:


 export KAFKA_OPTS="-Djava.security.auth.login.config=<path>/kafka_server_jaas.conf
-Djavax.net.debug=all -Dsun.security.krb5.debug=true
-Djava.security.krb5.conf=<path>/krb5.conf"

I enabled extra debugging too. The JAAS file has the following info:

KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/+kafka.keytab"
principal="kafka/@MY_DOMAIN";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
useTicketCache=true
storeKey=true
keyTab="/etc/+kafka.keytab"
principal="kafka/@MY_DOMAIN";
};

Note that I enabled useTicketCache in the client section.

I know that my krb5.conf file is good since the brokers are healthy and
consumer/producers are able to do their work.

Two scenarios:

1. When I enabled the useTicketCache=true, I get the following error:

Aug 08, 2016 8:42:46 PM org.apache.zookeeper.ClientCnxn$SendThread startConnect
WARNING: SASL configuration failed:
javax.security.auth.login.LoginException: No key to store. Will continue
connection to Zookeeper server without SASL authentication, if Zookeeper
server allows it.

I execute "kinit kafka/@ -k -t
/etc/+kafka.keytab " on the same shell where I run the .sh CLI
tool.
2. When I remove useTicketCache, I get the following error:

Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ZooKeeper close
INFO: Session: 0x356621f18f70009 closed
Error while executing ACL command:
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
NoAuth for /kafka-acl/Topic
Aug 08, 2016 9:03:38 PM org.apache.zookeeper.ClientCnxn$EventThread run
INFO: EventThread shut down
org.I0Itec.zkclient.exception.ZkException:
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
NoAuth for /kafka-acl/Topic
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)


Here is the command I run to set the ACLs in all cases:
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<host>:2181
--add --allow-principal User:Bob --producer --topic ssl-topic


I use Kafka 0.9.0.1. Note that I am using the same keytabs that my Brokers
(Kafka services) are using.


Any ideas what I am doing wrong or what I should do differently to get ACLs
set?

Thanks,
Derar