Re: no luck with kafka-connect on secure cluster

2016-11-26 Koert Kuipers
Ah okay, that makes sense. It also explains why, for a distributed source, I
actually have to set it twice:
security.protocol=SASL_PLAINTEXT
producer.security.protocol=SASL_PLAINTEXT

If anyone runs into this issue and just wants it to work, this is what is
in my configs now:
security.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.enabled.mechanisms=GSSAPI
consumer.sasl.kerberos.service.name=kafka
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.enabled.mechanisms=GSSAPI
producer.sasl.kerberos.service.name=kafka
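
For completeness: the config/jaas.conf referenced via KAFKA_OPTS throughout
this thread is not shown anywhere above. A minimal client login config that
picks up the ticket cache from kinit would look something like the sketch
below (this is an assumption about the setup; a keytab-based deployment
would use useKeyTab, keyTab and principal instead of useTicketCache):

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};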




Re: no luck with kafka-connect on secure cluster

2016-11-26 Ewen Cheslack-Postava
Koert,

I think what you're seeing is that there are actually three different ways
Connect can interact with Kafka. For both standalone and distributed mode,
you have producers and consumers that are part of the source and sink
connector implementations, respectively. Security for these is configured
using the producer.- and consumer.-prefixed configurations in the worker
config. In distributed mode, Connect also leverages Kafka's group
membership protocol to coordinate the workers and distribute work between
them; the security settings for that are picked up in the distributed
worker config without any prefixes.

For more info on configuring security, see Confluent's docs:
http://docs.confluent.io/3.1.1/connect/security.html#security

We realize having to specify this multiple times is annoying if you want to
use the same set of credentials, but for other configurations it is
important to keep the configs for worker/producer/consumer isolated (such
as interceptors, which use the same config name but different interfaces
for ProducerInterceptor vs ConsumerInterceptor). For configs we know might
be shared, we'd like to find a way to make this configuration simpler.
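
To make that concrete, a distributed worker config covering all three scopes
would look something like this, reusing the SASL_PLAINTEXT settings from
your example (only the security-related lines are shown):

# worker-level settings (no prefix): used for Connect's own group coordination
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
# producer used by source connectors
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.kerberos.service.name=kafka
# consumer used by sink connectors
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.kerberos.service.name=kafka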

-Ewen


Re: no luck with kafka-connect on secure cluster

2016-11-25 Koert Kuipers
Well, it seems that if you run Connect in distributed mode it's again
security.protocol=SASL_PLAINTEXT and not
producer.security.protocol=SASL_PLAINTEXT.

Don't ask me why.



Re: no luck with kafka-connect on secure cluster

2016-11-24 Koert Kuipers
For anyone who runs into this: it turns out I also had to set:
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.kerberos.service.name=kafka



no luck with kafka-connect on secure cluster

2016-11-24 Koert Kuipers
I have a secure Kafka 0.10.1 cluster using SASL_PLAINTEXT.

The Kafka servers seem fine, and I can start console-consumer and
console-producer and see the messages I type into the producer pop up in the
consumer. No problems so far.

For example, to start console-producer:
$ kinit
$ export KAFKA_OPTS="-Djava.security.auth.login.config=config/jaas.conf"
$ bin/kafka-console-producer.sh --producer.config config/producer.properties --topic test --broker-list SASL_PLAINTEXT://somenode:9092
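
The console-consumer side is similar; assuming a config/consumer.properties
that mirrors the security settings in producer.properties (hypothetical file
name, not shown in this thread), it would be something like:
$ bin/kafka-console-consumer.sh --consumer.config config/consumer.properties --topic test --bootstrap-server SASL_PLAINTEXT://somenode:9092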

But I am having no luck whatsoever with kafka-connect. I tried this:
$ kinit
$ export KAFKA_OPTS="-Djava.security.auth.login.config=config/jaas.conf"
$ bin/connect-standalone.sh config/connect-standalone.properties config/connect-console-source.properties

My config/connect-console-source.properties is unchanged. My
config/connect-standalone.properties has:

bootstrap.servers=SASL_PLAINTEXT://somenode:9092
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=1
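
For reference, the stock connect-console-source.properties is just a
one-task FileStreamSource connector with no input file (so it reads stdin)
producing to the connect-test topic; if memory serves, roughly:

name=local-console-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
topic=connect-test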

I get these logs in an infinite loop:
[2016-11-24 20:47:18,528] DEBUG Node -1 disconnected.
(org.apache.kafka.clients.NetworkClient:463)
[2016-11-24 20:47:18,528] WARN Bootstrap broker somenode:9092 disconnected
(org.apache.kafka.clients.NetworkClient:568)
[2016-11-24 20:47:18,528] DEBUG Give up sending metadata request since no
node is available (org.apache.kafka.clients.NetworkClient:625)
[2016-11-24 20:47:18,629] DEBUG Initialize connection to node -1 for
sending metadata request (org.apache.kafka.clients.NetworkClient:644)
[2016-11-24 20:47:18,629] DEBUG Initiating connection to node -1 at
somenode:9092. (org.apache.kafka.clients.NetworkClient:496)
[2016-11-24 20:47:18,631] DEBUG Created socket with SO_RCVBUF = 32768,
SO_SNDBUF = 124928, SO_TIMEOUT = 0 to node -1
(org.apache.kafka.common.network.Selector:327)
[2016-11-24 20:47:18,631] DEBUG Completed connection to node -1
(org.apache.kafka.clients.NetworkClient:476)
[2016-11-24 20:47:18,730] DEBUG Sending metadata request
{topics=[connect-test]} to node -1
(org.apache.kafka.clients.NetworkClient:640)
[2016-11-24 20:47:18,730] DEBUG Connection with somenode/192.168.1.54
disconnected (org.apache.kafka.common.network.Selector:365)
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
    at java.lang.Thread.run(Thread.java:745)

I tried different kafka-connect connectors, same result.

Any ideas? Thanks!