UNSUBSCRIBE ME

2017-04-10 Thread Zac Harvey
UNSUBSCRIBE ME


(BTW this doesn't work)


Re: Mailing list docs are misleading/broken/outdated

2017-03-23 Thread Zac Harvey
Hi, how do I unsubscribe from this mailing list? The details in the docs are not 
the correct set of steps.


From: Zac Harvey <zac.har...@welltok.com>
Sent: Thursday, March 23, 2017 10:03:48 AM
To: users@kafka.apache.org
Subject: Mailing list docs are misleading/broken/outdated

I am trying to unsubscribe from this mailing list.  When I go to:


https://kafka.apache.org/contact

The docs say:


"To unsubscribe from any of these you just change the word "subscribe" in the 
above to "unsubscribe"."

So when I send an email to users-unsubscr...@kafka.apache.org, I get SMTP 
errors/failures saying it is not a valid email address.

How do I unsubscribe?!
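
[A note for readers of the archive: the docs' rule does work when the address 
is written out in full. The archive view obfuscates addresses, but applying the 
quoted instruction to users-subscribe gives users-unsubscribe@kafka.apache.org. 
An empty message sent there from the subscribed address should do it, e.g.:

echo | mail -s unsubscribe users-unsubscribe@kafka.apache.org

The list manager then replies with a confirmation request to complete the 
process.]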


Re: Mailing list docs are misleading/broken/outdated

2017-03-23 Thread Zac Harvey
Hi, how do I unsubscribe from this mailing list?


From: Zac Harvey <zac.har...@welltok.com>
Sent: Thursday, March 23, 2017 10:03:48 AM
To: users@kafka.apache.org
Subject: Mailing list docs are misleading/broken/outdated

I am trying to unsubscribe from this mailing list.  When I go to:


https://kafka.apache.org/contact

The docs say:


"To unsubscribe from any of these you just change the word "subscribe" in the 
above to "unsubscribe"."

So when I send an email to users-unsubscr...@kafka.apache.org, I get SMTP 
errors/failures saying it is not a valid email address.

How do I unsubscribe?!


Re: Mailing list docs are misleading/broken/outdated

2017-03-23 Thread Zac Harvey
Any ideas?


From: Zac Harvey <zac.har...@welltok.com>
Sent: Thursday, March 23, 2017 10:03:48 AM
To: users@kafka.apache.org
Subject: Mailing list docs are misleading/broken/outdated

I am trying to unsubscribe from this mailing list.  When I go to:


https://kafka.apache.org/contact

The docs say:


"To unsubscribe from any of these you just change the word "subscribe" in the 
above to "unsubscribe"."

So when I send an email to users-unsubscr...@kafka.apache.org, I get SMTP 
errors/failures saying it is not a valid email address.

How do I unsubscribe?!


Mailing list docs are misleading/broken/outdated

2017-03-23 Thread Zac Harvey
I am trying to unsubscribe from this mailing list.  When I go to:


https://kafka.apache.org/contact

The docs say:


"To unsubscribe from any of these you just change the word "subscribe" in the 
above to "unsubscribe"."

So when I send an email to users-unsubscr...@kafka.apache.org, I get SMTP 
errors/failures saying it is not a valid email address.

How do I unsubscribe?!


Console producer can connect locally but not remotely

2017-01-25 Thread Zac Harvey
I have a single Kafka node at, say, IP address 1.2.3.4.  If I SSH into that 
node from 2 different terminal windows, and run the console consumer from 1 
terminal, and the console producer from another terminal, everything works 
great:


# Run the consumer from terminal 1

kafka-console-consumer.sh --zookeeper zkA:2181 --topic simpletest


# Run the producer from terminal 2

kafka-console-producer.sh --broker-list localhost:9092 --topic simpletest

# Now I can enter messages into terminal 2 and see them show up in terminal 1 
(the consumer)


If I kill the console producer, but leave the consumer running, and then SSH 
into a different server (say with IP address of 5.6.7.8), then run the 
producer, and then try to send a message to Kafka (so that the running consumer 
picks it up), I get the following warnings:


# Run the producer from terminal 2

kafka-console-producer.sh --broker-list 1.2.3.4:9092 --topic simpletest

# Now enter a simple string message to try and send to consumer running on Kafka
hello
[2017-01-25 22:27:21,439] WARN Bootstrap broker 1.2.3.4:9092 disconnected 
(org.apache.kafka.clients.NetworkClient)

[2017-01-25 22:27:21,439] WARN Bootstrap broker 1.2.3.4:9092 disconnected 
(org.apache.kafka.clients.NetworkClient)

[2017-01-25 22:27:21,439] WARN Bootstrap broker 1.2.3.4:9092 disconnected 
(org.apache.kafka.clients.NetworkClient)

...


These warnings keep being generated until I kill the producer. Most 
importantly, the message never arrives and the consumer (again, running on the 
Kafka node, terminal 1) never spits the "hello" message to the console/STDOUT.


I have confirmed that the server at 5.6.7.8 has network access to 1.2.3.4:9092:


telnet 1.2.3.4 9092

Trying 1.2.3.4...
Connected to 1.2.3.4.
Escape character is '^]'.

etc. So in summary, I can run the console consumer locally on the Kafka node, 
and I can run the console producer locally on the Kafka node, and they work 
fine. But if I have the console consumer running locally on Kafka, and then try 
to run + send messages from a remote console producer, I get producer warnings 
and the messages never arrive. I have confirmed that the remote producer has 
network access to the Kafka node at port 9092.


Any ideas as to what's going wrong and/or how I could troubleshoot?
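
[A note for readers of the archive: a frequent cause of exactly this pattern 
(local clients fine, remote producer looping on "Bootstrap broker ... 
disconnected") is the broker advertising an address that only resolves on the 
broker itself, such as localhost. The bootstrap connection succeeds, but the 
metadata response points the client at the advertised address, and every 
follow-up connection dies. A minimal sketch of the relevant server.properties 
lines for a 0.10-era broker, assuming 1.2.3.4 is the address remote clients 
can reach:

# bind on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# the address handed to clients in metadata responses; it must be reachable
# from the producer's host, not just from the broker itself
advertised.listeners=PLAINTEXT://1.2.3.4:9092

Restart the broker after the change and re-run the remote producer.]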


Re: Kafka Streams: how can I get the name of the Processor when calling `KStream.process`

2017-01-18 Thread Zac Harvey
Sorry that last response was for a different thread - please ignore (and sorry!)


From: Michael Noll 
Sent: Wednesday, January 18, 2017 9:56:35 AM
To: users@kafka.apache.org
Subject: Re: Kafka Streams: how can I get the name of the Processor when 
calling `KStream.process`

Nicolas,

if I understand your question correctly you'd like to add further
operations after having called `KStream#process()`, which -- as you report
-- doesn't work because `process()` returns void.

If that's indeed the case, +1 to Damian's suggestion to use
`KStream.transform()` instead of `KStream.process()`.

-Michael




On Wed, Jan 18, 2017 at 3:31 PM, Damian Guy  wrote:

> You could possibly also use KStream.transform(...)
>
> On Wed, 18 Jan 2017 at 14:22 Damian Guy  wrote:
>
> > Hi Nicolas,
> >
> > Good question! I'm not sure why it is a terminal operation, maybe one of
> > the original authors can chip in. However, you could probably work around
> > it by using TopologyBuilder.addProcessor(...) rather than
> KStream.process
> >
> > Thanks,
> > Damian
> >
> > On Wed, 18 Jan 2017 at 13:48 Nicolas Fouché  wrote:
> >
> > Hi,
> >
> > as far as I understand, calling `KStream.process` prevents the developer
> > from adding further operations to a `KStreamBuilder` [1], because its
> > return type is `void`. Good.
> >
> > But it also prevents the developer from adding operations to its
> superclass
> > `TopologyBuilder`. In my case I wanted to add a sink, and the parent of
> > this sink would be the name of the Processor that is created by
> > `KStream.process`. Is there any reason why this method does not return
> the
> > processor name [2] ? Is it because it would be a bad idea continuing
> > building my topology with the low-level API ?
> >
> > [1]
> > https://github.com/confluentinc/examples/blob/3.1.x/kafka-streams/src/test/java/io/confluent/examples/streams/MixAndMatchLambdaIntegrationTest.java#L56
> > [2]
> > https://github.com/apache/kafka/blob/b6011918fbc36bfaa465bdcc750e2435985d9101/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L391
> >
> >
> > Thanks.
> > Nicolas.
> >
> >
>
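
[A note for readers of the archive: to make Damian's and Michael's suggestion 
concrete, here is a minimal sketch against the 0.10-era Streams API. The topic 
names are hypothetical and the uppercasing stands in for real per-record work; 
the point is that transform() returns a KStream, so the topology can keep 
growing where process() would dead-end:

[code]
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class TransformExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "transform-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> source = builder.stream("input-topic");

        // transform() returns a KStream, unlike process(), whose return type is void
        source.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
            public void init(ProcessorContext context) { }

            public KeyValue<String, String> transform(String key, String value) {
                // per-record work goes here; returning a KeyValue keeps the stream alive
                return KeyValue.pair(key, value.toUpperCase());
            }

            public KeyValue<String, String> punctuate(long timestamp) {
                return null; // no scheduled output in this sketch
            }

            public void close() { }
        }).to("output-topic");

        new KafkaStreams(builder, new StreamsConfig(props)).start();
    }
}
[/code]
]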


Re: Correlation Id errors for both console producer and consumer

2017-01-18 Thread Zac Harvey
Anybody ever seen this before? Anybody have any ideas as to where I can start 
troubleshooting?




From: Zac Harvey <zac.har...@welltok.com>
Sent: Tuesday, January 17, 2017 4:40:35 PM
To: users@kafka.apache.org
Subject: Re: Correlation Id errors for both console producer and consumer

Hi Jeff,


Versions:


Kafka: kafka_2.11-0.10.0.0
ZK: zookeeper-3.4.6

Let me know if you need any more details/info. Thanks!

-Zac


From: Jeff Widman <j...@netskope.com>
Sent: Tuesday, January 17, 2017 4:38:07 PM
To: users@kafka.apache.org
Subject: Re: Correlation Id errors for both console producer and consumer

What versions of Kafka and Zookeeper are you using?

On Tue, Jan 17, 2017 at 11:57 AM, Zac Harvey <zac.har...@welltok.com> wrote:

> I have 2 Kafkas backed by 3 ZK nodes. I want to test the Kafka nodes by
> running the kafka-console-producer and -consumer locally on each node.
>
> So I SSH into one of my Kafka brokers using 2 different terminals. In
> terminal #1 I run the consumer like so:
>
> /opt/kafka/bin/kafka-console-consumer.sh --zookeeper a.b.c.d:2181
> --topic test1
>
> Where a.b.c.d is the private IP of one of my 3 ZK nodes.
>
> Then in terminal #2 I run the producer like so:
>
> /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092
> --topic test1
>
> I am able to start both the consumer and producer just fine without any
> issues.
>
> However, in the producer terminal, if I "fire" a message at the test1
> topic by entering some text (such as "hello") and hitting the ENTER key, I
> immediately begin seeing this:
>
> [2017-01-17 19:45:57,353] WARN Error while fetching metadata with
> correlation id 0 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> [2017-01-17 19:45:57,372] WARN Error while fetching metadata with
> correlation id 1 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> [2017-01-17 19:45:57,477] WARN Error while fetching metadata with
> correlation id 2 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> [2017-01-17 19:45:57,582] WARN Error while fetching metadata with
> correlation id 3 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> ...and it keeps going!
>
> And, in the consumer terminal, even though I don't get any errors when I
> start the consumer, after about 30 seconds I get the following warning
> message:
>
> [2017-01-17 19:46:07,292] WARN Fetching topic metadata with
> correlation id 1 for topics [Set(test1)] from broker
> [BrokerEndPoint(1,ip-x-y-z-w.ec2.internal,9092)] failed
> (kafka.client.ClientUtils$)
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
> at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$
> doSend(SyncProducer.scala:79)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
> at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(
> ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
>
> Interestingly, ip-x-y-z-w.ec2.internal is the private DNS for the other
> Kafka broker, so perhaps this is some kind of failure during interbroker
> communication?
>
> Any ideas as to what is going on here and what I can do to troubleshoot?
>
>
>


Re: Kafka Streams: how can I get the name of the Processor when calling `KStream.process`

2017-01-18 Thread Zac Harvey
Anybody ever seen this before? Anybody have any ideas as to where I can start 
troubleshooting?


From: Michael Noll 
Sent: Wednesday, January 18, 2017 9:56:35 AM
To: users@kafka.apache.org
Subject: Re: Kafka Streams: how can I get the name of the Processor when 
calling `KStream.process`

Nicolas,

if I understand your question correctly you'd like to add further
operations after having called `KStream#process()`, which -- as you report
-- doesn't work because `process()` returns void.

If that's indeed the case, +1 to Damian's suggestion to use
`KStream.transform()` instead of `KStream.process()`.

-Michael




On Wed, Jan 18, 2017 at 3:31 PM, Damian Guy  wrote:

> You could possibly also use KStream.transform(...)
>
> On Wed, 18 Jan 2017 at 14:22 Damian Guy  wrote:
>
> > Hi Nicolas,
> >
> > Good question! I'm not sure why it is a terminal operation, maybe one of
> > the original authors can chip in. However, you could probably work around
> > it by using TopologyBuilder.addProcessor(...) rather than
> KStream.process
> >
> > Thanks,
> > Damian
> >
> > On Wed, 18 Jan 2017 at 13:48 Nicolas Fouché  wrote:
> >
> > Hi,
> >
> > as far as I understand, calling `KStream.process` prevents the developer
> > from adding further operations to a `KStreamBuilder` [1], because its
> > return type is `void`. Good.
> >
> > But it also prevents the developer from adding operations to its
> superclass
> > `TopologyBuilder`. In my case I wanted to add a sink, and the parent of
> > this sink would be the name of the Processor that is created by
> > `KStream.process`. Is there any reason why this method does not return
> the
> > processor name [2] ? Is it because it would be a bad idea continuing
> > building my topology with the low-level API ?
> >
> > [1]
> > https://github.com/confluentinc/examples/blob/3.1.x/kafka-streams/src/test/java/io/confluent/examples/streams/MixAndMatchLambdaIntegrationTest.java#L56
> > [2]
> > https://github.com/apache/kafka/blob/b6011918fbc36bfaa465bdcc750e2435985d9101/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L391
> >
> >
> > Thanks.
> > Nicolas.
> >
> >
>


Re: Correlation Id errors for both console producer and consumer

2017-01-17 Thread Zac Harvey
Hi Jeff,


Versions:


Kafka: kafka_2.11-0.10.0.0
ZK: zookeeper-3.4.6

Let me know if you need any more details/info. Thanks!

-Zac


From: Jeff Widman <j...@netskope.com>
Sent: Tuesday, January 17, 2017 4:38:07 PM
To: users@kafka.apache.org
Subject: Re: Correlation Id errors for both console producer and consumer

What versions of Kafka and Zookeeper are you using?

On Tue, Jan 17, 2017 at 11:57 AM, Zac Harvey <zac.har...@welltok.com> wrote:

> I have 2 Kafkas backed by 3 ZK nodes. I want to test the Kafka nodes by
> running the kafka-console-producer and -consumer locally on each node.
>
> So I SSH into one of my Kafka brokers using 2 different terminals. In
> terminal #1 I run the consumer like so:
>
> /opt/kafka/bin/kafka-console-consumer.sh --zookeeper a.b.c.d:2181
> --topic test1
>
> Where a.b.c.d is the private IP of one of my 3 ZK nodes.
>
> Then in terminal #2 I run the producer like so:
>
> /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092
> --topic test1
>
> I am able to start both the consumer and producer just fine without any
> issues.
>
> However, in the producer terminal, if I "fire" a message at the test1
> topic by entering some text (such as "hello") and hitting the ENTER key, I
> immediately begin seeing this:
>
> [2017-01-17 19:45:57,353] WARN Error while fetching metadata with
> correlation id 0 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> [2017-01-17 19:45:57,372] WARN Error while fetching metadata with
> correlation id 1 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> [2017-01-17 19:45:57,477] WARN Error while fetching metadata with
> correlation id 2 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> [2017-01-17 19:45:57,582] WARN Error while fetching metadata with
> correlation id 3 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.
> NetworkClient)
> ...and it keeps going!
>
> And, in the consumer terminal, even though I don't get any errors when I
> start the consumer, after about 30 seconds I get the following warning
> message:
>
> [2017-01-17 19:46:07,292] WARN Fetching topic metadata with
> correlation id 1 for topics [Set(test1)] from broker
> [BrokerEndPoint(1,ip-x-y-z-w.ec2.internal,9092)] failed
> (kafka.client.ClientUtils$)
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
> at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$
> doSend(SyncProducer.scala:79)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
> at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(
> ConsumerFetcherManager.scala:66)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
>
> Interestingly, ip-x-y-z-w.ec2.internal is the private DNS for the other
> Kafka broker, so perhaps this is some kind of failure during interbroker
> communication?
>
> Any ideas as to what is going on here and what I can do to troubleshoot?
>
>
>


Correlation Id errors for both console producer and consumer

2017-01-17 Thread Zac Harvey
I have 2 Kafkas backed by 3 ZK nodes. I want to test the Kafka nodes by running 
the kafka-console-producer and -consumer locally on each node.

So I SSH into one of my Kafka brokers using 2 different terminals. In terminal 
#1 I run the consumer like so:

/opt/kafka/bin/kafka-console-consumer.sh --zookeeper a.b.c.d:2181 --topic 
test1

Where a.b.c.d is the private IP of one of my 3 ZK nodes.

Then in terminal #2 I run the producer like so:

/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 
--topic test1

I am able to start both the consumer and producer just fine without any issues.

However, in the producer terminal, if I "fire" a message at the test1 topic by 
entering some text (such as "hello") and hitting the ENTER key, I immediately 
begin seeing this:

[2017-01-17 19:45:57,353] WARN Error while fetching metadata with 
correlation id 0 : {test1=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-01-17 19:45:57,372] WARN Error while fetching metadata with 
correlation id 1 : {test1=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-01-17 19:45:57,477] WARN Error while fetching metadata with 
correlation id 2 : {test1=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2017-01-17 19:45:57,582] WARN Error while fetching metadata with 
correlation id 3 : {test1=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
...and it keeps going!

And, in the consumer terminal, even though I don't get any errors when I start 
the consumer, after about 30 seconds I get the following warning message:

[2017-01-17 19:46:07,292] WARN Fetching topic metadata with correlation id 
1 for topics [Set(test1)] from broker 
[BrokerEndPoint(1,ip-x-y-z-w.ec2.internal,9092)] failed 
(kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
at 
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at 
kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

Interestingly, ip-x-y-z-w.ec2.internal is the private DNS for the other Kafka 
broker, so perhaps this is some kind of failure during interbroker 
communication?

Any ideas as to what is going on here and what I can do to troubleshoot?
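
[A note for readers of the archive: two checks usually narrow this down. 
LEADER_NOT_AVAILABLE on a first send is expected while auto topic creation 
completes, but it should clear after a few retries. If it persists, confirm 
the topic has a leader, and confirm that the name each broker advertises 
(here the ec2.internal DNS) resolves from every host that has to reach it, 
including the other broker:

/opt/kafka/bin/kafka-topics.sh --zookeeper a.b.c.d:2181 --describe --topic test1

A "Leader: -1" in that output, or an advertised internal hostname that does 
not resolve where the client (or the peer broker) runs, would produce exactly 
these symptoms.]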




Re: Writing a customized principal builder for authorization

2016-11-30 Thread Zac Harvey
How do you then modify Kafka's searchable classpath to pick up this new 
principal.builder.class classfile from a JAR somewhere on the filesystem?


In other words, I change my server.properties to:


principal.builder.class=com.example.kafkautils.MyCustomKafkaPrincipalBuilder


How will Kafka be able to find that at startup?


From: Mayuresh Gharat 
Sent: Wednesday, November 30, 2016 12:51:14 PM
To: users@kafka.apache.org
Subject: Re: Writing a customized principal builder for authorization

"principal.builder.class" is the name of the property.

Thanks,

Mayuresh

On Wed, Nov 30, 2016 at 9:30 AM, Mayuresh Gharat wrote:

> Hi Kiriti,
>
> You will have to implement the Principal Builder interface and provide the
> full class path in broker config. I don't remember the exact config name
> right now, but you can search for some config by name
> "principalbuilder.class" in the broker configs.
>
> Once you do this, Kafka will automatically use your custom
> PrincipalBuilder class for generating the principal.
>
> The buildPrincipal() function in the PrincipalBuilder is where you will
> have to create your custom Principal class object (this custom
> principal class should implement the Java Principal interface), and this
> custom principal's getName() can return whatever name you want.
>
> Let me know if this helps.
>
> Thanks,
>
> Mayuresh
>
>
>
> Sent from my iPhone
>
> > On Nov 29, 2016, at 11:40 PM, Kiriti Sai 
> wrote:
> >
> > Hi,
> > Can anyone help me or point me to any resources that can be of help for
> > writing a customized principal builder to use in Authorization using
> ACLs?
> > I've enabled the SSL authentication scheme for both clients and brokers, but I
> > would like to change the principal name to just the original name and
> > organizational unit instead of the complete default principal name for
> SSL.
> >
> > Thanks in advance for the help.
>



--
-Regards,
Mayuresh R. Gharat
(862) 250-7125
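
[A note for readers of the archive: kafka-run-class.sh assembles the broker 
CLASSPATH from the JARs under the installation's libs/ directory, so the usual 
route is to drop the JAR containing the builder into /opt/kafka/libs/ (or to 
export CLASSPATH before starting the broker). For completeness, a rough sketch 
of such a builder against the 0.10-era interface; the package and class names 
are the hypothetical ones from the question, and the CN/OU extraction is only 
illustrative:

[code]
package com.example.kafkautils;

import java.security.Principal;
import java.util.Map;

import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.network.Authenticator;
import org.apache.kafka.common.network.TransportLayer;
import org.apache.kafka.common.security.auth.PrincipalBuilder;

public class MyCustomKafkaPrincipalBuilder implements PrincipalBuilder {

    public void configure(Map<String, ?> configs) {
        // no configuration needed for this sketch
    }

    public Principal buildPrincipal(TransportLayer transportLayer,
                                    Authenticator authenticator) throws KafkaException {
        try {
            // for an SSL connection this is the client certificate's X.500 name,
            // e.g. "CN=client01,OU=Engineering,O=Example,C=US"
            String dn = transportLayer.peerPrincipal().getName();
            StringBuilder shortName = new StringBuilder();
            for (String part : dn.split(",")) {
                String p = part.trim();
                // keep only the CN and OU components of the distinguished name
                if (p.startsWith("CN=") || p.startsWith("OU=")) {
                    if (shortName.length() > 0) shortName.append(",");
                    shortName.append(p);
                }
            }
            final String name = shortName.toString();
            // java.security.Principal only requires getName()
            return new Principal() {
                public String getName() {
                    return name;
                }
            };
        } catch (Exception e) {
            throw new KafkaException("Failed to build principal", e);
        }
    }

    public void close() throws KafkaException {
        // nothing to clean up
    }
}
[/code]
]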


Re: Messages intermittently get lost

2016-11-30 Thread Zac Harvey
Hi Martin, makes sense.


When I SSH into all 3 of my ZK nodes and run:


sh zkServer.sh status


All three of them give me the following output:


JMX enabled by default

zkServer.sh: 81: /opt/zookeeper/bin/zkEnv.sh: Syntax error: "(" unexpected 
(expecting "fi")


Looks like a bug in the ZK shell script?


Best,

Zac
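
[A note for readers of the archive: that error is more likely a shell mismatch 
than a ZooKeeper bug. zkEnv.sh uses bash-only syntax, and on Ubuntu plain sh 
is dash, which rejects it. Running the script through bash is a quick check:

bash /opt/zookeeper/bin/zkServer.sh status

If each node reports "Mode: leader" or "Mode: follower", the quorum itself is 
healthy.]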


From: Martin Gainty <mgai...@hotmail.com>
Sent: Tuesday, November 29, 2016 11:18:21 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Zach


we don't know what's causing this intermittent problem, so let's divide and 
conquer each part of this problem individually, starting at the source of the 
data feeds.


Let us eliminate any potential problem with feeds from external sources


Once you verify the ZooKeeper feeds are 100% reliable, let's move on to Kafka.


Ping back when you have verifiable results from the ZooKeeper feeds.


Thanks

Martin
__



________
From: Zac Harvey <zac.har...@welltok.com>
Sent: Tuesday, November 29, 2016 10:46 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Does anybody have any idea why ZK might be to blame if messages sent by a Kafka 
producer fail to be received by a Kafka consumer?

____________
From: Zac Harvey <zac.har...@welltok.com>
Sent: Monday, November 28, 2016 9:07:41 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Thanks Martin, I will look at those links.


But you seem to be 100% confident that the problem is with ZooKeeper...can I 
ask why? What is it about my problem description that makes you think this is 
an issue with ZooKeeper?


From: Martin Gainty <mgai...@hotmail.com>
Sent: Friday, November 25, 2016 1:46:28 PM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost



____
From: Zac Harvey <zac.har...@welltok.com>
Sent: Friday, November 25, 2016 6:17 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Martin,


My server.properties looks like this:


listeners=PLAINTEXT://0.0.0.0:9092

advertised.host.name=

broker.id=2

port=9092

num.partitions=4

zookeeper.connect=zkA:2181,zkB:2181,zkC:2181

MG>can you check status for each ZK Node in the quorum?

sh>$ZOOKEEPER_HOME/bin/zkServer.sh status

http://www.ibm.com/support/knowledgecenter/SSCRJU_4.0.0/com.ibm.streams.pd.doc/doc/containerstreamszookeeper.html



MG>*greetings from Plimoth Mass*

MG>M


num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

log.dirs=/tmp/kafka-logs

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=30

zookeeper.connection.timeout.ms=6000

delete.topic.enable=true

auto.leader.rebalance.enable=true


Above, 'zkA', 'zkB' and 'zkC' are defined in /etc/hosts and are valid ZK 
servers, and the value of advertised.host.name is the public DNS of the EC2 
(AWS) node that this Kafka is running on.


Anything look incorrect to you?


And yes, yesterday was a holiday, but there is only work! I'll celebrate one 
big, long holiday when I'm dead!


Thanks for any-and-all input here!


Best,

Zac



From: Martin Gainty <mgai...@hotmail.com>
Sent: Thursday, November 24, 2016 9:03:33 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Zach


there is a rumour that today, Thursday, is a holiday?

in server.properties how are you configuring your server?


specifically what are these attributes?


num.network.threads=


num.io.threads=


socket.send.buff

Re: Messages intermittently get lost

2016-11-29 Thread Zac Harvey
Does anybody have any idea why ZK might be to blame if messages sent by a Kafka 
producer fail to be received by a Kafka consumer?


From: Zac Harvey <zac.har...@welltok.com>
Sent: Monday, November 28, 2016 9:07:41 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Thanks Martin, I will look at those links.


But you seem to be 100% confident that the problem is with ZooKeeper...can I 
ask why? What is it about my problem description that makes you think this is 
an issue with ZooKeeper?


From: Martin Gainty <mgai...@hotmail.com>
Sent: Friday, November 25, 2016 1:46:28 PM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost



________
From: Zac Harvey <zac.har...@welltok.com>
Sent: Friday, November 25, 2016 6:17 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Martin,


My server.properties looks like this:


listeners=PLAINTEXT://0.0.0.0:9092

advertised.host.name=

broker.id=2

port=9092

num.partitions=4

zookeeper.connect=zkA:2181,zkB:2181,zkC:2181

MG>can you check status for each ZK Node in the quorum?

sh>$ZOOKEEPER_HOME/bin/zkServer.sh status

http://www.ibm.com/support/knowledgecenter/SSCRJU_4.0.0/com.ibm.streams.pd.doc/doc/containerstreamszookeeper.html




MG>*greetings from Plimoth Mass*

MG>M


num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

log.dirs=/tmp/kafka-logs

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=30

zookeeper.connection.timeout.ms=6000

delete.topic.enable=true

auto.leader.rebalance.enable=true


Above, 'zkA', 'zkB' and 'zkC' are defined in /etc/hosts and are valid ZK 
servers, and the value of advertised.host.name is the public DNS of the EC2 
(AWS) node that this Kafka is running on.


Anything look incorrect to you?


And yes, yesterday was a holiday, but there is only work! I'll celebrate one 
big, long holiday when I'm dead!


Thanks for any-and-all input here!


Best,

Zac



From: Martin Gainty <mgai...@hotmail.com>
Sent: Thursday, November 24, 2016 9:03:33 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Zach


there is a rumour that today, Thursday, is a holiday?

in server.properties how are you configuring your server?


specifically what are these attributes?


num.network.threads=


num.io.threads=


socket.send.buffer.bytes=


socket.receive.buffer.bytes=


socket.request.max.bytes=


num.partitions=


num.recovery.threads.per.data.dir=


?

Martin
______



____
From: Zac Harvey <zac.har...@welltok.com>
Sent: Thursday, November 24, 2016 7:05 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Anybody?!? This is very disconcerting!


From: Zac Harvey <zac.har...@welltok.com>
Sent: Wednesday, November 23, 2016 5:07:45 AM
To: users@kafka.apache.org
Subject: Messages intermittently get lost

I am playing around with Kafka and have a simple setup:


* 1-node Kafka (Ubuntu) server

* 3-node ZK cluster (each on their own Ubuntu server)


I have a consumer written in Scala and am using the kafka-console-producer 
(v0.10) that ships with the distribution.


I'd say about 20% of the messages I send via the producer never get consumed by 
the Scala process (which is running continuously). No errors on either side 
(producer or consumer): the producer sends, and, nothing...


Any ideas as to what might be going on here, or how I could start 
troubleshooting?


Thanks!


Re: Messages intermittently get lost

2016-11-28 Thread Zac Harvey
Thanks Martin, I will look at those links.


But you seem to be 100% confident that the problem is with ZooKeeper...can I 
ask why? What is it about my problem description that makes you think this is 
an issue with ZooKeeper?


From: Martin Gainty <mgai...@hotmail.com>
Sent: Friday, November 25, 2016 1:46:28 PM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost




From: Zac Harvey <zac.har...@welltok.com>
Sent: Friday, November 25, 2016 6:17 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Martin,


My server.properties looks like this:


listeners=PLAINTEXT://0.0.0.0:9092

advertised.host.name=

broker.id=2

port=9092

num.partitions=4

zookeeper.connect=zkA:2181,zkB:2181,zkC:2181

MG>can you check status for each ZK Node in the quorum?

sh>$ZOOKEEPER_HOME/bin/zkServer.sh status

http://www.ibm.com/support/knowledgecenter/SSCRJU_4.0.0/com.ibm.streams.pd.doc/doc/containerstreamszookeeper.html




MG>*greetings from Plimoth Mass*

MG>M


num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

log.dirs=/tmp/kafka-logs

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=30

zookeeper.connection.timeout.ms=6000

delete.topic.enable=true

auto.leader.rebalance.enable=true


Above, 'zkA', 'zkB' and 'zkC' are defined in /etc/hosts and are valid ZK 
servers, and the value of advertised.host.name is the public DNS of the EC2 
(AWS) node that this Kafka is running on.


Anything look incorrect to you?


And yes, yesterday was a holiday, but there is only work! I'll celebrate one 
big, long holiday when I'm dead!


Thanks for any-and-all input here!


Best,

Zac



From: Martin Gainty <mgai...@hotmail.com>
Sent: Thursday, November 24, 2016 9:03:33 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Zach


there is a rumour that today, Thursday, is a holiday?

in server.properties how are you configuring your server?


specifically what are these attributes?


num.network.threads=


num.io.threads=


socket.send.buffer.bytes=


socket.receive.buffer.bytes=


socket.request.max.bytes=


num.partitions=


num.recovery.threads.per.data.dir=


?

Martin
__



____
From: Zac Harvey <zac.har...@welltok.com>
Sent: Thursday, November 24, 2016 7:05 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Anybody?!? This is very disconcerting!

____
From: Zac Harvey <zac.har...@welltok.com>
Sent: Wednesday, November 23, 2016 5:07:45 AM
To: users@kafka.apache.org
Subject: Messages intermittently get lost

I am playing around with Kafka and have a simple setup:


* 1-node Kafka (Ubuntu) server

* 3-node ZK cluster (each on their own Ubuntu server)


I have a consumer written in Scala and am using the kafka-console-producer 
(v0.10) that ships with the distribution.


I'd say about 20% of the messages I send via the producer never get consumed by 
the Scala process (which is running continuously). No errors on either side 
(producer or consumer): the producer sends, and, nothing...


Any ideas as to what might be going on here, or how I could start 
troubleshooting?


Thanks!


Re: Messages intermittently get lost

2016-11-25 Thread Zac Harvey
Hi Martin,


My server.properties looks like this:


listeners=PLAINTEXT://0.0.0.0:9092

advertised.host.name=

broker.id=2

port=9092

num.partitions=4

zookeeper.connect=zkA:2181,zkB:2181,zkC:2181

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

log.dirs=/tmp/kafka-logs

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=30

zookeeper.connection.timeout.ms=6000

delete.topic.enable=true

auto.leader.rebalance.enable=true


Above, 'zkA', 'zkB' and 'zkC' are defined in /etc/hosts and are valid ZK 
servers, and the value of advertised.host.name is the public DNS of the EC2 
(AWS) node that this Kafka is running on.


Anything look incorrect to you?


And yes, yesterday was a holiday, but there is only work! I'll celebrate one 
big, long holiday when I'm dead!


Thanks for any-and-all input here!


Best,

Zac
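
[A note for readers of the archive: one line in the listing above is worth a 
second look. log.retention.check.interval.ms=30 would run the retention 
checker every 30 milliseconds; the server.properties shipped with 0.10 sets it 
to 300000 (five minutes). If the 30 is a transcription artifact, ignore this; 
if it is really set to 30, restoring the default is advisable.]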



From: Martin Gainty <mgai...@hotmail.com>
Sent: Thursday, November 24, 2016 9:03:33 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Zach


there is a rumour that today, Thursday, is a holiday?

in server.properties how are you configuring your server?


specifically what are these attributes?


num.network.threads=


num.io.threads=


socket.send.buffer.bytes=


socket.receive.buffer.bytes=


socket.request.max.bytes=


num.partitions=


num.recovery.threads.per.data.dir=


?

Martin
__




From: Zac Harvey <zac.har...@welltok.com>
Sent: Thursday, November 24, 2016 7:05 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Anybody?!? This is very disconcerting!

________
From: Zac Harvey <zac.har...@welltok.com>
Sent: Wednesday, November 23, 2016 5:07:45 AM
To: users@kafka.apache.org
Subject: Messages intermittently get lost

I am playing around with Kafka and have a simple setup:


* 1-node Kafka (Ubuntu) server

* 3-node ZK cluster (each on their own Ubuntu server)


I have a consumer written in Scala and am using the kafka-console-producer 
(v0.10) that ships with the distribution.


I'd say about 20% of the messages I send via the producer never get consumed by 
the Scala process (which is running continuously). No errors on either side 
(producer or consumer): the producer sends, and, nothing...


Any ideas as to what might be going on here, or how I could start 
troubleshooting?


Thanks!


Re: Messages intermittently get lost

2016-11-24 Thread Zac Harvey
Anybody?!? This is very disconcerting!


From: Zac Harvey <zac.har...@welltok.com>
Sent: Wednesday, November 23, 2016 5:07:45 AM
To: users@kafka.apache.org
Subject: Messages intermittently get lost

I am playing around with Kafka and have a simple setup:


* 1-node Kafka (Ubuntu) server

* 3-node ZK cluster (each on their own Ubuntu server)


I have a consumer written in Scala and am using the kafka-console-producer 
(v0.10) that ships with the distribution.


I'd say about 20% of the messages I send via the producer never get consumed by 
the Scala process (which is running continuously). No errors on either side 
(producer or consumer): the producer sends, and, nothing...


Any ideas as to what might be going on here, or how I could start 
troubleshooting?


Thanks!


Messages intermittently get lost

2016-11-23 Thread Zac Harvey
I am playing around with Kafka and have a simple setup:


* 1-node Kafka (Ubuntu) server

* 3-node ZK cluster (each on their own Ubuntu server)


I have a consumer written in Scala and am using the kafka-console-producer 
(v0.10) that ships with the distribution.


I'd say about 20% of the messages I send via the producer never get consumed by 
the Scala process (which is running continuously). No errors on either side 
(producer or consumer): the producer sends, and, nothing...


Any ideas as to what might be going on here, or how I could start 
troubleshooting?


Thanks!
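
[A note for readers of the archive: one way to split this problem in half is 
to read the topic back from the beginning with a fresh console consumer after 
a batch of sends (the topic name below is a placeholder):

kafka-console-consumer.sh --zookeeper zkA:2181 --topic <your-topic> --from-beginning

If every message shows up there, the broker has them and the loss is on the 
consuming side (offsets, group handling, or the Scala consumer itself). If 
some are missing even here, the producer is dropping them, and re-testing with 
the console producer's --request-required-acks option set to 1 or higher is 
the next step.]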


Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Zac Harvey
Thanks again. So this might be very telling of the underlying problem:


I did what you suggested:


1) I disabled (actually deleted) the first rule; then

2) I changed the load balancer's second (which is now its only) rule to accept 
TCP:9093 and to translate that to TCP:9093, making the connection PLAINTEXT all 
the way through to Kafka; then

3) I tried connecting a Scala consumer to the load balancer URL 
(mybalancer01.example.com) and I'm getting that ClosedChannelException


For now there is only one Kafka broker sitting behind the load balancer. Its 
server.properties looks like:


listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092

advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092

advertised.host.name=mykafka01.example.com

security.inter.broker.protocol=SASL_PLAINTEXT

sasl.enabled.mechanisms=PLAIN

sasl.mechanism.inter.broker.protocol=PLAIN

broker.id=1

num.partitions=4

zookeeper.connect=zkA:2181,zkB:2181,zkC:2181

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

log.dirs=/tmp/kafka-logs

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=30

zookeeper.connection.timeout.ms=6000

offset.metadata.max.bytes=4096


Above, 'zkA', 'zkB' and 'zkC' are defined inside `/etc/hosts` and are valid 
server names.


And then inside the kafka-run-class.sh script, instead of the default:


if [ -z "$KAFKA_OPTS" ]; then

  KAFKA_OPTS=""

fi


I have:


if [ -z "$KAFKA_OPTS" ]; then

  KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"

fi


I also added the /opt/kafka/config/jaas.conf file like you suggested, and only 
changed the names of users and passwords:


KafkaServer {

  org.apache.kafka.common.security.plain.PlainLoginModule required

  username="someuser"

  user_kafka="somePassword"

  password="kafka-password";

};


The fact that I can no longer even consume from a topic over PLAINTEXT (which 
is a regression of where I was before we started trying to add SSL) tells me 
there is something wrong in either server.properties or jaas.conf. I've checked 
the Kafka broker logs (server.log) each time I try connecting and this is the 
only line that gets printed:


[2016-11-21 15:18:14,859] INFO [Group Metadata Manager on Broker 2]: Removed 0 
expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)


Not sure if that means anything. Any idea where I might be going wrong? Thanks 
again!
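
[A note for readers of the archive: one detail in the jaas.conf above would 
explain the regression. In SASL/PLAIN, username/password are the credentials 
the broker itself presents on inter-broker connections, and each 
user_<name>="<password>" entry defines a login the server accepts. As written, 
the broker logs in as someuser with kafka-password, but the only accepted 
login is kafka with somePassword, so inter-broker authentication can never 
succeed. A self-consistent version would be:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="someuser"
  password="somePassword"
  user_someuser="somePassword";
};
]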


From: Rajini Sivaram <rajinisiva...@googlemail.com>
Sent: Monday, November 21, 2016 11:03:14 AM
To: users@kafka.apache.org
Subject: Re: Can Kafka/SSL be terminated at a load balancer?

Rule #1 and Rule #2 cannot co-exist. You are basically configuring your LB
to point to a Kafka broker and you are pointing each Kafka broker to point
to a LB. So you need a pair of ports with a security protocol for the
connection to work. With two rules, Kafka picks up the wrong LB port for
one of the security protocols.

If you want to try without SSL first, the simplest way to try it out would
be to disable Rule #1 and change Rule #2 to use port 9093 instead of 9095.
Then you should be able to connect using PLAINTEXT (the test that is
currently not working).

I think you have the configuration:

advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093
,SASL_PLAINTEXT://mykafka01.example.com:9092

And you have a client connecting with PLAINTEXT on mybalancer01:*9095*. The
first connection would work, but subsequent connections would use the
address provided by Kafka from advertised.listeners. The client  will start
connecting with PLAINTEXT on mybalancer01:*9093*, which is expecting SSL.
If you disable Rule #1 and change Rule #2 to use port 9093, you should be
able to test PLAINTEXT without changing Kafka config.

On Mon, Nov 21, 2016 at 3:32 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> In the last email I should have mentioned: don't pay too much attention to
> the code snippet; after reviewing it, I can see it is actually incomplete
> (I forgot to include the section where I configure the topics and broker
> configs to talk to Kafka!).
>
>
> What I'm really concerned about is that before we added all these SSL
> configs, I had plaintext (plaintext:9092 in/out of the load balancer
> to/from Kafka) working fine. Now my consumer code can't even connect to the
> load balancer/Kafka.
>
>
> So I guess what I was really asking was: does that exception
> (ClosedChannelException) indicate bad configs on the Kafka broker?
>
> 
> From: Zac Harvey <zac.har...@welltok.com>
> Sent: Thursday, November 17, 2016 4:44:06 PM
> To: users@kafka.apache.org
> Subject: Can Kafka/SSL be terminat

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Zac Harvey
In the last email I should have mentioned: don't pay too much attention to the 
code snippet; after reviewing it, I can see it is actually incomplete (I 
forgot to include the section where I configure the topics and broker configs 
to talk to Kafka!).


What I'm really concerned about is that before we added all these SSL configs, 
I had plaintext (plaintext:9092 in/out of the load balancer to/from Kafka) 
working fine. Now my consumer code can't even connect to the load 
balancer/Kafka.


So I guess what I was really asking was: does that exception 
(ClosedChannelException) indicate bad configs on the Kafka broker?


From: Zac Harvey <zac.har...@welltok.com>
Sent: Thursday, November 17, 2016 4:44:06 PM
To: users@kafka.apache.org
Subject: Can Kafka/SSL be terminated at a load balancer?

We have two Kafka nodes and for reasons outside of this question, would like to 
set up a load balancer to terminate SSL with producers (clients). The SSL cert 
hosted by the load balancer will be signed by trusted/root CA that clients 
should natively trust.


Is this possible to do, or does Kafka somehow require SSL to be setup directly 
on the Kafka servers themselves?


Thanks!


Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Zac Harvey
Thanks Rajini,


So I have implemented your solution (the the best of my knowledge) 100% as you 
have specified.


On the load balancer (AWS ELB) I have two load balancer rules:


Rule #1:

Incoming: SSL:9093

Outgoing (to Kafka): TCP:9093


Rule #2:

Incoming: TCP:9095

Outgoing (toKafka): TCP: 9093


I added the 2nd rule so that I could test this config/setup one step at a time, 
beginning with a Scala/Spark consumer that tries to connect to the load 
balancer over plaintext/9095. When I run this consumer/test code:


[code]
import scala.collection.JavaConverters._

import kafka.serializer.StringDecoder
import org.apache.spark.SparkContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.kafka.KafkaUtils

object ScalaTestConsumer {
    def main(args: Array[String]): Unit = {
        // JsonScalaUtils and Processor are project-local helpers, not shown here
        val jsonMapper: JsonScalaUtils = new JsonScalaUtils()
        var jsonArgs = jsonMapper.toMap[String](args(0))
        val kafkaConfigs = jsonMapper.toMap[String](args(1))

        def messageConsumer(): StreamingContext = {
            val ssc = new StreamingContext(SparkContext.getOrCreate(),
                Seconds(10))
            val topic: String = jsonArgs("topic")

            createKafkaStream(ssc, topic, kafkaConfigs).foreachRDD { rdd =>
                // collect() pulls each batch to the driver; acceptable for a test consumer
                rdd.collect().foreach { msg =>
                    try {
                        jsonArgs += "inboundMessage" -> msg._2
                        Processor.start(jsonArgs.asJava)
                    } catch {
                        case e: Throwable =>
                            println("Exception thrown: " + e.getMessage)
                            e.printStackTrace()
                    }
                }
            }

            ssc
        }

        // stop any previous streaming context, but keep the SparkContext alive
        StreamingContext.getActive.foreach {
            _.stop(stopSparkContext = false)
        }

        val ssc = StreamingContext.getActiveOrCreate(messageConsumer)
        ssc.start()
        ssc.awaitTermination()
    }

    def createKafkaStream(ssc: StreamingContext, kafkaTopics: String,
        kafkaConfigs: Map[String, String]): DStream[(String, String)] = {
        KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
            ssc, kafkaConfigs, Set(kafkaTopics))
    }
}
[/code]


I get this exception after 60 seconds:


org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
at 
org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
 at 
org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
 at scala.util.Either.fold(Either.scala:97)




If I can get this working and properly connecting to ELB (hence consuming from 
Kafka) then I can run the kafka-console-producer and send a message to the 
topic, and see my Scala consumer process it.


Once I have that working, I can have the confidence to tell the other Ruby 
client (producers) to start trying to connect over port SSL:9093 and I will 
remove the second load balancer rule.


Any ideas as to why I'm getting this 'ClosedChannelException' exception, or how 
I could troubleshoot it?


Thanks again!


Best,

Zac


From: Rajini Sivaram <rajinisiva...@googlemail.com>
Sent: Monday, November 21, 2016 10:11:00 AM
To: users@kafka.apache.org
Subject: Re: Can Kafka/SSL be terminated at a load balancer?

A load balancer that balances the load across the brokers wouldn't work,
but here the LB is being used as a terminating SSL proxy. That should work
if each Kafka is configured with its own proxy.

On Mon, Nov 21, 2016 at 2:57 PM, tao xiao <xiaotao...@gmail.com> wrote:

> I doubt the LB solution will work for Kafka. Client needs to connect to the
> leader of a partition to produce/consume messages. If we put a LB in front
> of all brokers which means all brokers share the same LB how does the LB
> figure out the leader?
> On Mon, Nov 21, 2016 at 10:26 PM Martin Gainty <mgai...@hotmail.com>
> wrote:
>
> >
> >
> >
> >
> > 
> > From: Zac Harvey <zac.har...@welltok.com>
> > Sent: Monday, November 21, 2016 8:59 AM
> > To: users@kafka.apache.org
> > Subject: Re: Can Kafka/SSL be terminated at a load balancer?
> >
> > Thanks again Rajini,
> >
> >
> > Using these configs, would clients connect to the load balancer over
> > SSL/9093? And then would I configure the load balancer to forward traffic
> > from SSL/9093 to plaintext/9093?
> >
> > MG>Zach
> >
> > MG>i could be wrong but SSL port != plaintext port ..but consider:
> >
> > MG>consider recent testcase where all traffic around a certain location
> > gets bogged with DOS attacks
> >
> > MG>what are the legitimate role(s) of the LB when SSL Traffic and HTTP1.1
> > Traffic and FTP Traffic are ALL blocked?
> >
> > MG>LB should never be stripping SSL headers to redirect to PlainText
> > because you are not rerouting to

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Zac Harvey
Thanks again Rajini,


Using these configs, would clients connect to the load balancer over SSL/9093? 
And then would I configure the load balancer to forward traffic from SSL/9093 
to plaintext/9093?


Thanks again, just still a little uncertain about the traffic/ports coming into 
the load balancer!


Best,

Zac


From: Rajini Sivaram <rajinisiva...@googlemail.com>
Sent: Monday, November 21, 2016 8:48:41 AM
To: users@kafka.apache.org
Subject: Re: Can Kafka/SSL be terminated at a load balancer?

Zac,

Yes, that is correct. Ruby clients will not be authenticated by Kafka. They
talk SSL to the load balancer and the load balancer uses PLAINTEXT without
authentication to talk to Kafka.

On Mon, Nov 21, 2016 at 1:29 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> *Awesome* explanation Rajini - thank you!
>
>
> Just to confirm: the SASL/PLAIN configs would only be for the interbroker
> communication, correct? Meaning, beyond your recommended changes to
> server.properties, and the addition of the new jaas.conf file, the
> producers (Ruby clients) wouldn't need to authenticate, correct?
>
>
> Thanks again for all the great help so far, you've already helped me more
> than you know!
>
>
> Zac
>
> 
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Monday, November 21, 2016 3:53:47 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> Zac,
>
> *advertised.listeners* is used to make client connections from
> producers/consumers as well as for client-side connections for inter-broker
> communication. In your scenario, setting it to *PLAINTEXT://mykafka01*
> would work for inter-broker, bypassing the load balancer, but clients would
> also then attempt to connect directly to *mykafka01*.  Setting it to
> *SSL://mybalancer01* would work for producers/consumers, but brokers would
> try to connect to *mybalancer01* using PLAINTEXT. Unfortunately neither
> works for both. You need two endpoints, one for inter-broker that bypasses
> *mybalancer01* and another for clients that uses *mybalancer01*. With the
> current Kafka configuration, you would require two security protocols to
> enable two endpoints.
>
> You could enable SSL in Kafka (using self-signed certificates if you need)
> for one of the two endpoints to overcome this limitation. But presumably
> you have a secure internal network running Kafka and want to avoid the cost
> of encryption in Kafka. The simplest solution I can think of is to use
> SASL_PLAINTEXT using SASL/PLAIN for inter-broker as a workaround. The
> configuration options in server.properties would look like:
>
> listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092
>
> advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093
> ,SASL_PLAINTEXT://mykafka01.example.com:9092
>
> security.inter.broker.protocol=SASL_PLAINTEXT
>
> sasl.enabled.mechanisms=PLAIN
>
> sasl.mechanism.inter.broker.protocol=PLAIN
>
>
> You also need a JAAS configuration file configured for the broker JVM (
> *KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/jaas.conf"*) . See
> https://kafka.apache.org/documentation#security_sasl for configuring
> SASL.*
> jaas.conf* would look something like:
>
> KafkaServer {
>
> org.apache.kafka.common.security.plain.PlainLoginModule required
>
> username="kafka"
>
> user_kafka="kafka-password"
>
> password="kafka-password";
>
> };
>
>
> Hope that helps.
>
>
> On Fri, Nov 18, 2016 at 6:39 PM, Zac Harvey <zac.har...@welltok.com>
> wrote:
>
> > Thanks again Rajini!
> >
> >
> > One last followup question, if you don't mind. You said that my
> > server.properties file should look something like this:
> >
> >
> > listeners=SSL://:9093
> > advertised.listeners=SSL://mybalancer01.example.com:9093
> > security.inter.broker.protocol=SSL
> >
> > However, please remember that I'm looking for the load balancer to
> > terminate SSL, meaning that (my desired) communication between the load
> > balancer and Kafka would be over plaintext (not SSL).  In other words:
> >
> > Ruby Producers/Clients <--- SSL:9093 ---> Load Balancer <--- Plaintext:9092 ---> Kafka
> >
> > So producers/client connect to the load balancer over SSL and port 9093,
> > but then the load balancer communicates with Kafka over plaintext and
> port
> > 9092.
> >
> > I also don't need inter broker communication to be SSL; it can be
> > plaintext.
> >
> > If this is the case, do I still need to change s

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Zac Harvey
*Awesome* explanation Rajini - thank you!


Just to confirm: the SASL/PLAIN configs would only be for the interbroker 
communication, correct? Meaning, beyond your recommended changes to 
server.properties, and the addition of the new jaas.conf file, the producers 
(Ruby clients) wouldn't need to authenticate, correct?


Thanks again for all the great help so far, you've already helped me more than 
you know!


Zac


From: Rajini Sivaram <rajinisiva...@googlemail.com>
Sent: Monday, November 21, 2016 3:53:47 AM
To: users@kafka.apache.org
Subject: Re: Can Kafka/SSL be terminated at a load balancer?

Zac,

*advertised.listeners* is used to make client connections from
producers/consumers as well as for client-side connections for inter-broker
communication. In your scenario, setting it to *PLAINTEXT://mykafka01*
would work for inter-broker, bypassing the load balancer, but clients would
also then attempt to connect directly to *mykafka01*.  Setting it to
*SSL://mybalancer01* would work for producers/consumers, but brokers would
try to connect to *mybalancer01* using PLAINTEXT. Unfortunately neither
works for both. You need two endpoints, one for inter-broker that bypasses
*mybalancer01* and another for clients that uses *mybalancer01*. With the
current Kafka configuration, you would require two security protocols to
enable two endpoints.

You could enable SSL in Kafka (using self-signed certificates if you need)
for one of the two endpoints to overcome this limitation. But presumably
you have a secure internal network running Kafka and want to avoid the cost
of encryption in Kafka. The simplest solution I can think of is to use
SASL_PLAINTEXT using SASL/PLAIN for inter-broker as a workaround. The
configuration options in server.properties would look like:

listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092

advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093
,SASL_PLAINTEXT://mykafka01.example.com:9092

security.inter.broker.protocol=SASL_PLAINTEXT

sasl.enabled.mechanisms=PLAIN

sasl.mechanism.inter.broker.protocol=PLAIN


You also need a JAAS configuration file configured for the broker JVM (
*KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/jaas.conf"*) . See
https://kafka.apache.org/documentation#security_sasl for configuring SASL.*
jaas.conf* would look something like:

KafkaServer {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="kafka"

user_kafka="kafka-password"

password="kafka-password";

};


Hope that helps.


On Fri, Nov 18, 2016 at 6:39 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> Thanks again Rajini!
>
>
> One last followup question, if you don't mind. You said that my
> server.properties file should look something like this:
>
>
> listeners=SSL://:9093
> advertised.listeners=SSL://mybalancer01.example.com:9093
> security.inter.broker.protocol=SSL
>
> However, please remember that I'm looking for the load balancer to
> terminate SSL, meaning that (my desired) communication between the load
> balancer and Kafka would be over plaintext (not SSL).  In other words:
>
> Ruby Producers/Clients <--- SSL:9093 ---> Load Balancer <--- Plaintext:9092 ---> Kafka
>
> So producers/client connect to the load balancer over SSL and port 9093,
> but then the load balancer communicates with Kafka over plaintext and port
> 9092.
>
> I also don't need inter broker communication to be SSL; it can be
> plaintext.
>
> If this is the case, do I still need to change server.properties, or can I
> leave it like so:
>
> listeners=plaintext://:9092
> advertised.listeners=plaintext://mybalancer01.example.com:9092
>
> Or could it just be:
>
> listeners=PLAINTEXT://:9092
> advertised.listeners=PLAINTEXT://mykafka01.example.com:9092
>
> Thanks again!
> Zac
>
>
>
>
>
> 
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Friday, November 18, 2016 9:57:22 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> You should set advertised.listeners rather than the older
> advertised.host.name property in server.properties:
>
>
>- listeners=SSL://:9093
>- advertised.listeners=SSL://mybalancer01.example.com:9093
>- security.inter.broker.protocol=SSL
>
>
> If your listeners are on particular interfaces, you can set address in the
> 'listeners' property too.
>
>
> If you want inter-broker communication to bypass the SSL proxy, you would
> need another security protocol that can be used for inter-broker
> communication (PLAINTEXT in the example below).
>
>
>
>- listeners=SSL://:9093,PLAINTEXT://:9092
>    - advertised.listeners=SSL://mybalancer01.example.com:9093,PLAINTEXT://mykafka01.example.com:9092
>    - security.inter.broker.protocol=PLAINTEXT

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-18 Thread Zac Harvey
Thanks again Rajini!


One last followup question, if you don't mind. You said that my 
server.properties file should look something like this:


listeners=SSL://:9093
advertised.listeners=SSL://mybalancer01.example.com:9093
security.inter.broker.protocol=SSL

However, please remember that I'm looking for the load balancer to terminate
SSL, meaning my desired communication between the load balancer and
Kafka would be over plaintext (not SSL).  In other words:

Ruby Producers/Clients <--SSL:9093--> Load Balancer <--Plaintext:9092--> Kafka

So producers/clients connect to the load balancer over SSL and port 9093, but
then the load balancer communicates with Kafka over plaintext and port 9092.
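
A quick way to sanity-check this split once it is in place (hostnames taken
from above; this is only an illustrative TLS probe, not a Kafka client test):

# the balancer's SSL frontend should complete a TLS handshake
openssl s_client -connect mybalancer01.example.com:9093 </dev/null

# the broker itself speaks plaintext, so a TLS probe of 9092 should fail
openssl s_client -connect mykafka01.example.com:9092 </dev/null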

I also don't need inter-broker communication to be SSL; it can be plaintext.

If this is the case, do I still need to change server.properties, or can I 
leave it like so:

listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://mybalancer01.example.com:9092

Or could it just be:

listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://mykafka01.example.com:9092

Thanks again!
Zac






From: Rajini Sivaram <rajinisiva...@googlemail.com>
Sent: Friday, November 18, 2016 9:57:22 AM
To: users@kafka.apache.org
Subject: Re: Can Kafka/SSL be terminated at a load balancer?

You should set advertised.listeners rather than the older
advertised.host.name property in server.properties:


   - listeners=SSL://:9093
   - advertised.listeners=SSL://mybalancer01.example.com:9093
   - security.inter.broker.protocol=SSL


If your listeners are on particular interfaces, you can set address in the
'listeners' property too.


If you want inter-broker communication to bypass the SSL proxy, you would
need another security protocol that can be used for inter-broker
communication (PLAINTEXT in the example below).



   - listeners=SSL://:9093,PLAINTEXT://:9092
   - advertised.listeners=SSL://mybalancer01.example.com:9093,PLAINTEXT://mykafka01.example.com:9092
   - security.inter.broker.protocol=PLAINTEXT

I haven't used the Ruby clients, so I am not sure about client
configuration. With Java clients, if you don't specify a truststore, the
default trust stores are used, so with trusted CA-signed certificates, no
additional client configuration is required. You can test your installation
using the console producer and consumer that are shipped with Kafka to make
sure it is working before you run with the Ruby clients.
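
For instance, a minimal smoke test through the balancer might look like the
following (the topic name is illustrative, and the property flags assume a
0.10.x-era distribution):

bin/kafka-console-producer.sh --broker-list mybalancer01.example.com:9093 \
    --topic test --producer-property security.protocol=SSL

bin/kafka-console-consumer.sh --bootstrap-server mybalancer01.example.com:9093 \
    --topic test --from-beginning --consumer-property security.protocol=SSL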



On Fri, Nov 18, 2016 at 1:23 PM, Zac Harvey <zac.har...@welltok.com> wrote:

>
> Thanks Rajini,
>
>
> So currently one of our Kafka nodes is 'mykafka01.example.com', and in its
> server.properties file, I have advertised.host.name=mykafka01.example.com.
> Our load balancer lives at mybalancer01.example.com, and this is what
> producers will connect to (over SSL) to send messages to Kafka.
>
>
> It sounds like you're saying I need to change my Kafka node's
> server.properties to have advertised.host.name=mybalancer01.example.com,
> yes? If not, can you perhaps provide a quick snippet of the changes I would
> need to make to server.properties?
>
>
> Again, the cert served by the balancer will be a highly-trusted (root
> CA-signed) certificate that all clients will natively trust. Interestingly
> enough, most (if not all) the Kafka producers/clients will be written in
> Ruby (using the zendesk Kafka-Ruby gem<https://github.com/zendesk/ruby-kafka>),
> so there won't be any JKS configuration options
> available for those Ruby clients.
>
>
> Besides making the change to server.properties that I mentioned above, are
> there any other client-side configs that will need to be made for the Ruby
> clients to connect over SSL?
>
>
> Thank you enormously here!
>
>
> Best,
>
> Zac
>
>
> 
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Friday, November 18, 2016 5:15:13 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> Zac,
>
> Kafka has its own built-in load-balancing mechanism based on partition
> assignment. Requests are processed by partition leaders, distributing load
> across the brokers in the cluster. If you want to put a proxy like HAProxy
> with SSL termination in front of your brokers for added security, you can
> do that. You can have completely independent trust chains between
> clients->proxy and proxy->broker. You need to configure Kafka brokers with
> the proxy host as the host in the advertised listeners for the security
> protocol used by clients.
>
> On Thu, Nov 17, 2016 at 9:44 PM, Zac Harvey <zac.har...@welltok.com>
> wrote:
>
> > We have two Kafka nodes and for reasons outside of this question, would
> > like to set up a load balancer to

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-18 Thread Zac Harvey

Thanks Rajini,


So currently one of our Kafka nodes is 'mykafka01.example.com', and in its 
server.properties file, I have advertised.host.name=mykafka01.example.com. Our 
load balancer lives at mybalancer01.example.com, and this is what producers will
connect to (over SSL) to send messages to Kafka.


It sounds like you're saying I need to change my Kafka node's server.properties 
to have advertised.host.name=mybalancer01.example.com, yes? If not, can you 
perhaps provide a quick snippet of the changes I would need to make to 
server.properties?


Again, the cert served by the balancer will be a highly-trusted (root 
CA-signed) certificate that all clients will natively trust. Interestingly 
enough, most (if not all) the Kafka producers/clients will be written in Ruby 
(using the zendesk Kafka-Ruby gem<https://github.com/zendesk/ruby-kafka>), so 
there won't be any JKS configuration options available for those Ruby clients.


Besides making the change to server.properties that I mentioned above, are 
there any other client-side configs that will need to be made for the Ruby 
clients to connect over SSL?


Thank you enormously here!


Best,

Zac



From: Rajini Sivaram <rajinisiva...@googlemail.com>
Sent: Friday, November 18, 2016 5:15:13 AM
To: users@kafka.apache.org
Subject: Re: Can Kafka/SSL be terminated at a load balancer?

Zac,

Kafka has its own built-in load-balancing mechanism based on partition
assignment. Requests are processed by partition leaders, distributing load
across the brokers in the cluster. If you want to put a proxy like HAProxy
with SSL termination in front of your brokers for added security, you can
do that. You can have completely independent trust chains between
clients->proxy and proxy->broker. You need to configure Kafka brokers with
the proxy host as the host in the advertised listeners for the security
protocol used by clients.
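
As a rough illustration (assuming HAProxy as the balancer; hostnames and the
certificate path are placeholders, and only one broker is shown, since in
practice each broker usually needs its own frontend so that clients can reach
the specific partition leader they are directed to):

frontend kafka_ssl
    mode tcp
    bind *:9093 ssl crt /etc/haproxy/certs/mybalancer01.pem
    default_backend kafka_plaintext

backend kafka_plaintext
    mode tcp
    server kafka01 mykafka01.example.com:9092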

On Thu, Nov 17, 2016 at 9:44 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> We have two Kafka nodes and for reasons outside of this question, would
> like to set up a load balancer to terminate SSL for producers (clients).
> The SSL cert hosted by the load balancer will be signed by a trusted/root CA
> that clients should natively trust.
>
>
> Is this possible to do, or does Kafka somehow require SSL to be set up
> directly on the Kafka servers themselves?
>
>
> Thanks!
>



--
Regards,

Rajini


Can Kafka/SSL be terminated at a load balancer?

2016-11-17 Thread Zac Harvey
We have two Kafka nodes and, for reasons outside of this question, would like
to set up a load balancer to terminate SSL for producers (clients). The SSL
cert hosted by the load balancer will be signed by a trusted/root CA that
clients should natively trust.


Is this possible to do, or does Kafka somehow require SSL to be set up
directly on the Kafka servers themselves?


Thanks!