Securing Multi-Node single broker kafka instance

2017-03-01 Thread IT Consultant
Hi Team,

Can you please help me understand:

1. How can I secure a multi-node (3 machines), single-broker (one broker per machine) Apache
Kafka deployment using SSL?

I tried to follow the instructions here but found them pretty confusing:

https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/

http://docs.confluent.io/2.0.0/kafka/security.html

Currently, I have Kafka running on 3 different machines.
2. How do I make them talk to each other over SSL?
3. How do I make the ZooKeeper nodes talk to each other and to the brokers?

Requesting your help.

Thanks in advance.


Re: Securing Multi-Node single broker kafka instance

2017-03-01 Thread IT Consultant
Hi Harsha,

Thanks a lot.

Let me explain where I am stuck.

I have three machines, each running Apache Kafka with a single broker,
and the ZooKeeper on each machine is configured with the other machines.

Example : node1=zk1,zk2,zk3
node2=zk1,zk2,zk3
node3=zk1,zk2,zk3

This is done for HA.

Now I need to secure this deployment using SSL.

*Things tried so far:*

I created a key and certificate for each of these nodes and configured the
brokers according to the documentation.

However, I see the following error when I run the console producer and consumer
with a client certificate or client properties file:

WARN Error while fetching metadata for topic


How do I make each broker work with the other brokers?
How do I generate and store certificates for this? The online documentation
seems confusing to me.
How do I make the ZooKeepers sync with each other and behave as before?



On Thu, Mar 2, 2017 at 2:25 AM, Harsha Chintalapani wrote:

> For inter-broker communication over SSL, all you need is to set
> security.inter.broker.protocol to SSL.
> "How do i make zookeeper talk to each other and brokers?"
> Not sure I understand the question. You need to make sure the zookeeper hosts
> and port are reachable from your broker nodes.
> -Harsha
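Harsha's suggestion boils down to a few lines of broker configuration. A minimal server.properties sketch for one node — the hostname, paths, and the `changeit` passwords are placeholders, not values from this thread:

```
# SSL listener used by both clients and inter-broker replication
listeners=SSL://node1:9093
advertised.listeners=SSL://node1:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/etc/kafka/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/etc/kafka/kafka.server.truststore.jks
ssl.truststore.password=changeit
```

Each broker needs its own keystore (with a certificate matching its hostname) but can share the truststore containing the signing CA.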


Re: Securing Multi-Node single broker kafka instance

2017-03-01 Thread IT Consultant
Sure, Harsha. I shall follow the recommended method.

However, I would like to add to the discussion that the current deployment
worked just fine.

People have been using it for quite some time with no security.

Do I need to create the topics all over again if I am enabling security?

On Thu, Mar 2, 2017 at 3:03 AM, Harsha  wrote:

> Here is the recommended way to set up a 3-node Kafka cluster. It's always
> recommended to keep the zookeeper nodes on a different set of nodes than the
> ones you are running Kafka on. To go with your current 3-node installation:
> 1. Install a 3-node zookeeper ensemble and make sure it forms a quorum (
> https://zookeeper.apache.org/doc/r3.3.2/zookeeperAdmin.html)
> 2. Install the apache kafka binaries on all 3 nodes.
> 3. Make sure you keep the same zookeeper.connect in server.properties on
> all 3 nodes for your kafka broker.
> 4. Start the Kafka brokers.
> 5. As a sanity check, create a topic with a replication
> factor of 3 and see if you can produce & consume messages.
>
> Before stepping into security, make sure your non-secure Kafka cluster
> works OK. Once you have a stable & working cluster,
> follow the instructions in the doc to enable SSL.
>
> -Harsha
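Step 5 of Harsha's checklist can be exercised with the console tools that ship with Kafka. A sketch, assuming the zk1–zk3 and node1 hostnames from this thread and the default ports:

```
# Create a topic replicated across all 3 brokers
bin/kafka-topics.sh --create --zookeeper zk1:2181,zk2:2181,zk3:2181 \
  --replication-factor 3 --partitions 3 --topic sanity-check

# Produce a few messages, then read them back
bin/kafka-console-producer.sh --broker-list node1:9092 --topic sanity-check
bin/kafka-console-consumer.sh --bootstrap-server node1:9092 \
  --topic sanity-check --from-beginning
```

If the consumer echoes what the producer sent, replication and the ZooKeeper quorum are healthy and it is safe to move on to SSL.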


Re: Securing Multi-Node single broker kafka instance

2017-03-01 Thread IT Consultant
Hi Harsha,

Just looked at the URL you shared.

I have ensured that the zookeeper.properties file is the same across all nodes,
just like it's shown here.
As I stated earlier, it has been working well for quite some time.

tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888

Is generating a key and certificate enough, or should I do anything
on the zookeeper front to make it work with the kafka brokers?


Am I missing anything here?
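For the certificate question: the usual pattern from the Kafka SSL documentation is one keystore per broker, with every broker certificate signed by a shared CA that all brokers and clients trust. A non-interactive sketch — the alias, file names, and the `changeit`/`capass` passwords are placeholders:

```
# 1) Create a self-signed CA once
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 \
  -subj "/CN=kafka-ca" -passout pass:capass

# 2) Per broker: generate a key pair; CN must match the broker hostname
keytool -keystore kafka.server.keystore.jks -alias node1 -validity 365 \
  -genkey -keyalg RSA -dname "CN=node1" -storepass changeit -keypass changeit

# 3) Export a signing request and sign it with the CA
keytool -keystore kafka.server.keystore.jks -alias node1 -certreq \
  -file cert-file -storepass changeit
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed \
  -days 365 -CAcreateserial -passin pass:capass

# 4) Import the CA cert and the signed cert back into the keystore
keytool -keystore kafka.server.keystore.jks -alias CARoot -import \
  -file ca-cert -storepass changeit -noprompt
keytool -keystore kafka.server.keystore.jks -alias node1 -import \
  -file cert-signed -storepass changeit -noprompt

# 5) Every broker and client trusts the CA via a truststore
keytool -keystore kafka.server.truststore.jks -alias CARoot -import \
  -file ca-cert -storepass changeit -noprompt
```

Note this secures the Kafka listeners only; ZooKeeper in this Kafka version does not speak SSL, so the zookeeper.properties file above stays as it is.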




Re: Performance and encryption

2017-03-06 Thread IT Consultant
Hi Todd,

Can you please help me with notes or a document on how you achieved
encryption?

I have followed the material available on the official sites but failed, as I'm
not good with TLS.

On Mar 6, 2017 19:55, "Todd Palino"  wrote:

> It’s not that Kafka has to decode it, it’s that it has to send it across
> the network. This is specific to enabling TLS support (transport
> encryption), and won’t affect any end-to-end encryption you do at the
> client level.
>
> The operation in question is called “zero copy”. In order to send a message
> batch to a consumer, the Kafka broker must read it from disk (sometimes
> it’s cached in memory, but that’s irrelevant here) and send it across the
> network. The Linux kernel allows this to happen without having to copy the
> data in memory (to move it from the disk buffers to the network buffers).
> However, if TLS is enabled, the broker must first encrypt the data going
> across the network. This means that it can no longer take advantage of the
> zero copy optimization as it has to make a copy in the process of applying
> the TLS encryption.
>
> Now, how much of an impact this has on broker operations is up for
> debate, I think. We originally ran into this problem when TLS
> support was added to Kafka and the zero-copy send for plaintext
> communications was accidentally removed as well. At the time, we saw a
> significant performance hit, and the code was patched to put it back.
> However, since then I've turned on inter-broker TLS in all of our clusters,
> and when we did that there was no performance hit. This is odd, because the
> replica fetchers should take advantage of the same zero-copy optimization.
>
> It’s possible that it’s because it’s just one consumer (the replica
> fetchers). We’re about to start testing additional consumers over TLS, so
> we’ll see what happens at that point. All I can suggest right now is that
> you test in your environment and see what the impact is. Oh, and using
> message keys (or not) won’t matter here.
>
> -Todd
>
>
> On Mon, Mar 6, 2017 at 5:38 AM, Nicolas Motte 
> wrote:
>
> > Hi everyone,
> >
> > I understand one of the reasons why Kafka is performant is its use of
> > zero-copy.
> >
> > I often hear that when encryption is enabled, Kafka has to copy the
> > data into user space to decode the message, so it has a big impact on
> > performance.
> >
> > If that is true, I don't get why the message has to be decoded by Kafka. I
> > would assume that whether the message is encrypted or not, Kafka simply
> > receives it, appends it to the file, and when a consumer wants to read it,
> > it simply reads at the right offset...
> >
> > Also, I'm wondering if that's the case if we don't use keys (pure queuing
> > system with key=null).
> >
> > Cheers
> > Nico
> >
>
>
>
> --
> *Todd Palino*
> Staff Site Reliability Engineer
> Data Infrastructure Streaming
>
>
>
> linkedin.com/in/toddpalino
>
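The "zero copy" path Todd describes is the sendfile(2) optimization, which the JVM exposes as FileChannel.transferTo: the kernel moves bytes from the file straight to the destination channel without staging them in a user-space buffer, which is exactly what TLS breaks, since the broker must read and encrypt the bytes first. A self-contained sketch of the API itself, using a file-to-file transfer for illustration:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    // Transfer all of src into dst; where the OS supports it, the bytes move
    // kernel-side (sendfile) with no intermediate user-space copy.
    static long copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE)) {
            return in.transferTo(0, in.size(), out);
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("zerocopy", ".in");
        Files.write(src, "hello kafka".getBytes());
        Path dst = Files.createTempFile("zerocopy", ".out");
        System.out.println(copy(src, dst)); // prints 11, the payload length
    }
}
```

In the broker the destination is a socket channel rather than a file, but the call is the same; with TLS enabled the broker falls back to a read-encrypt-write loop instead.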


Kafka security

2017-04-11 Thread IT Consultant
Hi All,

How can I avoid using a password for keystore creation?

Our corporate policies don't allow us to hardcode passwords. We are
currently passing the keystore password while accessing the TLS-enabled Kafka
instance.

I would like to use either a passwordless keystore or avoid a password for
clients accessing Kafka.


Please help


Re: Kafka security

2017-04-11 Thread IT Consultant
Thanks for your response.

We aren't allowed to hardcode passwords in any of our programs.

On Apr 11, 2017 23:39, "Mar Ian" wrote:

> Since it is a java property, you could set the property (keystore password)
> programmatically,
> before you connect to kafka (i.e., before creating a consumer or producer):
>
> System.setProperty("zookeeper.ssl.keyStore.password", password);
>
> martin
>
> 
> From: IT Consultant <0binarybudd...@gmail.com>
> Sent: April 11, 2017 2:01 PM
> To: users@kafka.apache.org
> Subject: Kafka security
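Building on Martin's point: rather than hardcoding the secret in source, a common pattern is to inject it at runtime (environment variable, vault, or a permission-restricted file) and pass it into the client config programmatically. A hedged sketch — the listener address, keystore paths, and the KAFKA_KEYSTORE_PASSWORD variable name are assumptions, not values from this thread:

```java
import java.util.Properties;

public class SslClientConfig {
    // Assemble SSL client properties without a hardcoded keystore password:
    // the secret is read from the environment at runtime.
    static Properties sslConfig() {
        String pw = System.getenv().getOrDefault("KAFKA_KEYSTORE_PASSWORD", "");
        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9093");  // assumed SSL listener
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", pw);        // injected, not hardcoded
        return props;
    }

    public static void main(String[] args) {
        // The same Properties object would be handed to new KafkaProducer<>(props)
        System.out.println(sslConfig().getProperty("security.protocol")); // prints SSL
    }
}
```

This keeps the password out of source control entirely; the keystore file itself still has a password, but no program contains it.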


Apache Kafka SSL Deployment

2017-04-12 Thread IT Consultant
Hi All,


How can I avoid using a password for keystore creation?

We are currently passing the keystore password while accessing the TLS-enabled
Kafka instance.

I would like to use either a passwordless keystore or avoid a password for
clients accessing Kafka.


Failed to update metadata after 5000 ms

2017-05-10 Thread IT Consultant
Hi All,

Currently, I am running TLS-enabled multi-node Kafka.

*Version:* 2.11-0.10.1.1

*Scenario:* Whenever the producer tries to produce around 10 records at
once to Kafka, it fails with a *Failed to update metadata after 5000 ms* error.

*Server.properties:*

(inline image of server.properties; not preserved in the archive)

*Can you please help me fix it ASAP? It's affecting a prod instance.*

*Thanks a lot*


Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2017-06-02 Thread IT Consultant
Hi All,

I have been seeing the below error for three days.

Can you please help me understand more about it?


WARN Failed to send SSL Close message
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
	at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:194)
	at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:148)
	at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
	at org.apache.kafka.common.network.Selector.close(Selector.java:442)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:310)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
	at java.lang.Thread.run(Thread.java:745)


Thanks  a lot.


Re: Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2017-06-07 Thread IT Consultant
Hi All ,

Thanks a lot for your help .

A bug has been logged for this issue and can be found at:
https://issues.apache.org/jira/browse/KAFKA-5401


Thanks again .

On Sun, Jun 4, 2017 at 6:38 PM, Martin Gainty  wrote:

>
> 
> From: IT Consultant <0binarybudd...@gmail.com>
> Sent: Friday, June 2, 2017 11:02 AM
> To: users@kafka.apache.org
> Subject: Kafka Over TLS Error - Failed to send SSL Close message - Broken
> Pipe
>
> Hi All,
>
> I have been seeing below error since three days ,
>
> Can you please help me understand more about this ,
>
>
> WARN Failed to send SSL Close message
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
>  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>  at
> org.apache.kafka.common.network.SslTransportLayer.
> flush(SslTransportLayer.java:194)
>
> MG>Here is org.apache.kafka.common.network.SslTransportLayer code:
> /**
> * Flushes the buffer to the network, non blocking
> * @param buf ByteBuffer
> * @return boolean true if the buffer has been emptied out, false
> otherwise
> * @throws IOException
> */
> private boolean flush(ByteBuffer buf) throws IOException {
> int remaining = buf.remaining();
> if (remaining > 0) {
> int written = socketChannel.write(buf); //no check for
> isOpen() *socketChannel.isOpen()*
> return written >= remaining;
> }
> return true;
> }
>
> MG>it appears upstream monitor *container* closed connection but kafka
> socketChannel never tested (now-closed) connection with isOpen()
> MG>i think you found a bug
> MG>can you file bug in kafka-jira ?
> https://issues.apache.org/jira/browse/KAFKA/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
>
>
>
>
>  at
> org.apache.kafka.common.network.SslTransportLayer.
> close(SslTransportLayer.java:148)
>  at
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
>  at
> org.apache.kafka.common.network.Selector.close(Selector.java:442)
>  at org.apache.kafka.common.network.Selector.poll(
> Selector.java:310)
>  at
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
>  at
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>  at
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>  at java.lang.Thread.run(Thread.java:745)
>
>
> Thanks  a lot.
>


Parsing Kafka logs using Logstash

2017-06-08 Thread IT Consultant
Hi All,

Has anybody tried to parse Kafka logs using Logstash?

If yes, can you please share the patterns used to parse them?

Thanks in advance.
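As a starting point: the default Kafka broker log4j layout is "[%d] %p %m (%c)%n", which a grok filter along these lines can match. This is only a sketch — the field names are arbitrary, verify the layout against your own log4j.properties, and multi-line stack traces need a separate multiline codec:

```
filter {
  grok {
    # e.g. [2017-06-08 12:34:56,789] INFO Starting Kafka server (kafka.server.KafkaServer)
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{LOGLEVEL:level} %{GREEDYDATA:logmessage} \(%{DATA:logger}\)"
    }
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}
```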


KStream Usage spikes memory consumption and breaks Kafka

2017-06-20 Thread IT Consultant
Hi All,

Our Kafka instance breaks down when KStreams are used. It frequently runs out of
memory, resulting in service unavailability.


Is it good practice to use KStreams?
What other options should be tried to avoid such breakage?
If it is best practice, how do we fine-tune Kafka to withstand the load?

Thanks for your help in advance.


Multi-node deployment

2017-08-21 Thread IT Consultant
Hi All,

We are seeing the following behavior; let me know if it's expected or a
configuration error.

I have Apache Kafka running on three servers over the TLS protocol. They are
clustered at the ZK level.

*Behaviour*,

1. *Unable to run only one instance* - When 2 out of 3 servers or instances
go down, even the third one dies trying to connect to the other two ZK hosts.

2. *Currently the replication factor is set to 2* - Producers and consumers stop
working when one of the 3 servers goes down.


Thanks in advance


Independent zookeeper for kafka multi node

2017-08-23 Thread IT Consultant
Hi All,

Can I have three independent zookeepers, each tagged to one of three kafka
brokers, without any clustering or quorum?

Would it be a good idea?