Re: Kafka Connect - HDFS or FileStream

2019-05-15 Thread Vinay Jain
I would appreciate it if somebody with experience on the above could respond.
Logically it should work if we set the property hdfs.url =
file:///home/user/data
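
A minimal connector-config sketch of that idea, assuming the Confluent HDFS
sink connector (the topic name and flush size below are placeholders, and
whether file:// URLs actually work depends on the Hadoop client libraries on
the Connect worker, so treat this as untested):

name=local-avro-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
topics=my-avro-topic
# point the sink at the local filesystem instead of HDFS
hdfs.url=file:///home/user/data
# roll a new file every 1000 records; rotate.interval.ms gives time-based rolling
flush.size=1000
# keep the messages in Avro container files
format.class=io.confluent.connect.hdfs.avro.AvroFormat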



On Wed, May 15, 2019 at 5:06 AM Vinay Jain  wrote:

> Redirecting to a file will not work; we would not be able to roll over to
> different filenames after a certain time or file size. The HDFS Connect
> sink already has those options, and I could also apply some minor
> transformations while using the connector.
>
> On Tue, May 14, 2019 at 7:28 AM Hans Jespersen  wrote:
>
>> Can you just use kafka-console-consumer and redirect the output into a
>> file?
>>
>> -hans
>>
>>
>> On Mon, May 13, 2019 at 1:55 PM Vinay Jain  wrote:
>>
>> > Hi
>> >
>> > The data needs to be transferred to another system in a different
>> > network, and for security reasons the other systems cannot be exposed.
>> > So the available mechanism is file-based integration. Is there a
>> > production-ready Kafka Connect adapter which can create files in a
>> > local directory?
>> >
>> > Regards
>> > Vinay Jain
>> >
>> > On Mon, May 13, 2019 at 3:41 PM Robin Moffatt 
>> wrote:
>> >
>> > > Can you explain more about why you're writing a file with the data?
>> > > Presumably, this is for another application to consume; could it not
>> > > take the data from Kafka directly, whether with a native client or
>> > > over the REST proxy?
>> > > Oftentimes local files are unnecessary 'duct tape' for integration
>> > > that can be done in a better way.
>> > >
>> > >
>> > > --
>> > >
>> > > Robin Moffatt | Developer Advocate | ro...@confluent.io | @rmoff
>> > >
>> > >
>> > > On Mon, 13 May 2019 at 01:35, Vinay Jain  wrote:
>> > >
>> > > > Hi
>> > > >
>> > > > I would like to consume a topic and save the Avro messages to a
>> > > > local directory in Avro file format. As per the Kafka Connect
>> > > > FileStream documentation, it is not for production use.
>> > > >
>> > > > The other option I am considering is the Kafka Connect HDFS sink,
>> > > > but I am not sure whether it can also write to a local directory
>> > > > if we pass a local-filesystem URL in the hdfs.url property instead
>> > > > of an HDFS path.
>> > > >
>> > > > Will this work, or are there any other ready-made options which
>> > > > can be used for the same?
>> > > >
>> > > > Regards
>> > > >
>> > > > Vinay
>> > > >
>> > >
>> >
>>
>


Re: Performance Testing Using Consumer-Perf-Test

2019-05-15 Thread Hans Jespersen
1) Are all 10 publishers producing to the same topic? What level of ACKs do
you have set? How many partitions are in your topic? Are all 10 consumers
in the same consumer group or are they supposed to be independent consumers
that each get the full set of messages published?
2) That depends on what you are measuring (latency, throughput, or something
else). If you publish first, then your consumer has to consume either from
the beginning or in real time, and will only get messages published AFTER it
successfully subscribes. With 10 consumers you could generate a lot of
rebalancing if you don't start and balance them ahead of time.
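
As a concrete sketch of such a run with the stock perf-test scripts (topic
name and bootstrap address are placeholders; note that 4 MB records exceed
the broker's default 1 MB message limit, so max.message.bytes on the topic
and max.request.size on the producer would need raising first):

# one of ten producer instances: 100 records of 4 MB each, unthrottled
bin/kafka-producer-perf-test.sh --topic perf-test \
  --num-records 100 --record-size 4194304 --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092 acks=1 max.request.size=4500000

# one of ten consumer instances; sharing --group makes them one consumer
# group, while omitting it leaves each as an independent consumer
bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 \
  --topic perf-test --messages 100 --group perf-group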

-hans





On Wed, May 15, 2019 at 8:45 AM M. Manna  wrote:

> Hello,
>
> I am trying to do some performance testing using kafka-consumer-perf-test.
> Could someone please help me understand whether my setup is correct?
>
> 1) I would like to run a benchmark test to have 10 publishers publishing
> 100 messages (4MB) each and 10 subscribers.
>
> 2) For the above, do I need to run the producer first and then the
> consumer? Or is it okay just to run consumer-perf-test?
>
> Thanks,
>


Multiple state store files

2019-05-15 Thread Parthasarathy, Mohan
Hi,

I am seeing multiple state store files (.sst) created under each
 directory. Is this normal? If an application crashes
and comes back, would that cause this?

Thanks
Mohan



Re: SASL + SSL : authentication error in broker-to-broker communication

2019-05-15 Thread Kieran JOYEUX
Hello Martin,

First of all, thanks for your help on this matter.

However, pardon me, but I don't understand the correlation between my Kafka
certs and these authentication problems. Could you elaborate, please?

Regarding kafka-configs.sh, I did use the same user/password as the one in
the JAAS files, which are identical on each broker thanks to Puppet EPP
templating. As you can see in my ps faux output, Kafka is using the JAAS
file as the documentation advises:
-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf

I also checked that every ZooKeeper node returns the same salt and
server_key for each user described.

Anything else to check?

Thanks a lot.

Sincerely,

Kieran



From: Martin Gainty 
Sent: Wednesday, May 15, 2019 2:28 PM
To: users@kafka.apache.org
Subject: Re: SASL + SSL : authentication error in broker-to-broker communication

Assuming ScramSaslProvider/ScramSaslServer, your credentials are stored in ZK
under /config/users/. You cannot see plain-text attributes in ZK, so use the
Kafka tool to view them:
bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users

/*2019 update for kafka-configs.sh */

For ease of use, kafka-configs.sh will take a password and an optional
iteration count and generate a random salt, ServerKey, and StoredKey as
specified in RFC 5802. For example:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config \
  'SCRAM-SHA-256=[iterations=4096,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' \
  --entity-type users --entity-name alice

/* once you have verified the username and password from the ZK credentials */
You can now export your cert from the keystore:
keytool -exportcert -alias admin -keystore \
  /opt/kafka/ssl/kafka.server.keystore.jks -keypass  -storepass  -file \
  admin.cert

(note the storepass is for truststore!)

View admin.cert with a cert viewer and validate that the username (subject)
is consistent with the ZK creds. If you don't have a cert viewer, you can
convert to PEM:
openssl pkcs12 -in "admin.p12" -out "admin.pem"
Check that the UID in either the cert or the PEM is consistent with ZK.

Finally, check that the ZK credentials are propagated to jaas.conf:

# used by interbroker connections
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="";
};

If the username and password are consistent for all entities, then your
kafka-broker(s) *should* authenticate (assuming they all reference the same
ZK server!).

Good luck.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms#KIP-84:SupportSASLSCRAMmechanisms-JAASconfiguration





From: Kieran JOYEUX 
Sent: Wednesday, May 15, 2019 4:42 AM
To: users@kafka.apache.org
Subject: SASL + SSL : authentication error in broker-to-broker communication

Hello,

I'm facing trouble activating SASL on my current, working SSL-only cluster.
I have read the docs many times and my configuration seems to be good.
However, it's as if Kafka cannot authenticate, and broker-to-broker
communication is not working at all.

Do you have any ideas? (Descriptions below.)

Thanks a lot.

Kieran



# Versions
Kafka: 2.2.0
Zookeeper: 3.4.9-3+deb9u1

# Error message
[2019-05-15 10:14:00,811] DEBUG Set SASL server state to 
HANDSHAKE_OR_VERSIONS_REQUEST during authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,811] DEBUG Handling Kafka request API_VERSIONS during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,811] DEBUG Set SASL server state to HANDSHAKE_REQUEST 
during authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,812] DEBUG Handling Kafka request SASL_HANDSHAKE during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,812] DEBUG Using SASL mechanism 'SCRAM-SHA-512' provided 
by client 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,813] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
RECEIVE_CLIENT_FIRST_MESSAGE 
(org.apache.kafka.common.security.scram.internals.ScramSaslServer)
[2019-05-15 10:14:00,813] DEBUG Set SASL server state to AUTHENTICATE during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,814] DEB

Performance Testing Using Consumer-Perf-Test

2019-05-15 Thread M. Manna
Hello,

I am trying to do some performance testing using kafka-consumer-perf-test.
Could someone please help me understand whether my setup is correct?

1) I would like to run a benchmark test to have 10 publishers publishing
100 messages (4MB) each and 10 subscribers.

2) For the above, do I need to run the producer first and then the consumer?
Or is it okay just to run consumer-perf-test?

Thanks,


Re: SASL + SSL : authentication error in broker-to-broker communication

2019-05-15 Thread Martin Gainty
Assuming ScramSaslProvider/ScramSaslServer, your credentials are stored in ZK
under /config/users/. You cannot see plain-text attributes in ZK, so use the
Kafka tool to view them:
bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users

/*2019 update for kafka-configs.sh */

For ease of use, kafka-configs.sh will take a password and an optional
iteration count and generate a random salt, ServerKey, and StoredKey as
specified in RFC 5802. For example:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config \
  'SCRAM-SHA-256=[iterations=4096,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' \
  --entity-type users --entity-name alice

/* once you have verified the username and password from the ZK credentials */
You can now export your cert from the keystore:
keytool -exportcert -alias admin -keystore \
  /opt/kafka/ssl/kafka.server.keystore.jks -keypass  -storepass  -file \
  admin.cert

(note the storepass is for truststore!)

View admin.cert with a cert viewer and validate that the username (subject)
is consistent with the ZK creds. If you don't have a cert viewer, you can
convert to PEM:
openssl pkcs12 -in "admin.p12" -out "admin.pem"
Check that the UID in either the cert or the PEM is consistent with ZK.

Finally, check that the ZK credentials are propagated to jaas.conf:

# used by interbroker connections
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="";
};

If the username and password are consistent for all entities, then your
kafka-broker(s) *should* authenticate (assuming they all reference the same
ZK server!).
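
As a sketch, the broker-side server.properties that pairs with this JAAS
file for SCRAM over TLS would look something like the following (listener
addresses and store paths are placeholders, not taken from this thread):

# SCRAM-SHA-512 over TLS for both client and inter-broker traffic
listeners=SASL_SSL://0.0.0.0:9093
advertised.listeners=SASL_SSL://broker1.example.com:9093
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
ssl.keystore.location=/opt/kafka/ssl/kafka.server.keystore.jks
ssl.truststore.location=/opt/kafka/ssl/kafka.server.truststore.jks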

Good luck.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms#KIP-84:SupportSASLSCRAMmechanisms-JAASconfiguration





From: Kieran JOYEUX 
Sent: Wednesday, May 15, 2019 4:42 AM
To: users@kafka.apache.org
Subject: SASL + SSL : authentication error in broker-to-broker communication

Hello,

I'm facing trouble activating SASL on my current, working SSL-only cluster.
I have read the docs many times and my configuration seems to be good.
However, it's as if Kafka cannot authenticate, and broker-to-broker
communication is not working at all.

Do you have any ideas? (Descriptions below.)

Thanks a lot.

Kieran



# Versions
Kafka: 2.2.0
Zookeeper: 3.4.9-3+deb9u1

# Error message
[2019-05-15 10:14:00,811] DEBUG Set SASL server state to 
HANDSHAKE_OR_VERSIONS_REQUEST during authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,811] DEBUG Handling Kafka request API_VERSIONS during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,811] DEBUG Set SASL server state to HANDSHAKE_REQUEST 
during authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,812] DEBUG Handling Kafka request SASL_HANDSHAKE during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,812] DEBUG Using SASL mechanism 'SCRAM-SHA-512' provided 
by client 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,813] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
RECEIVE_CLIENT_FIRST_MESSAGE 
(org.apache.kafka.common.security.scram.internals.ScramSaslServer)
[2019-05-15 10:14:00,813] DEBUG Set SASL server state to AUTHENTICATE during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,814] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
FAILED (org.apache.kafka.common.security.scram.internals.ScramSaslServer)
[2019-05-15 10:14:00,814] DEBUG Set SASL server state to FAILED during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,814] INFO [SocketServer brokerId=2] Failed authentication 
with 10.101.60.15 (Authentication failed during authentication due to invalid 
credentials with SASL mechanism SCRAM-SHA-512) 
(org.apache.kafka.common.network.Selector)
[2019-05-15 10:14:00,815] DEBUG [SocketServer brokerId=2] Connection with 
10.101.60.15 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at 
org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:573)
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:94)
at 
org.apache.kafka.common.security.authenticator.S

SASL + SSL : authentication error in broker-to-broker communication

2019-05-15 Thread Kieran JOYEUX
Hello,

I'm facing trouble activating SASL on my current, working SSL-only cluster.
I have read the docs many times and my configuration seems to be good.
However, it's as if Kafka cannot authenticate, and broker-to-broker
communication is not working at all.

Do you have any ideas? (Descriptions below.)

Thanks a lot.

Kieran



# Versions
Kafka: 2.2.0
Zookeeper: 3.4.9-3+deb9u1

# Error message
[2019-05-15 10:14:00,811] DEBUG Set SASL server state to 
HANDSHAKE_OR_VERSIONS_REQUEST during authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,811] DEBUG Handling Kafka request API_VERSIONS during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,811] DEBUG Set SASL server state to HANDSHAKE_REQUEST 
during authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,812] DEBUG Handling Kafka request SASL_HANDSHAKE during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,812] DEBUG Using SASL mechanism 'SCRAM-SHA-512' provided 
by client 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,813] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
RECEIVE_CLIENT_FIRST_MESSAGE 
(org.apache.kafka.common.security.scram.internals.ScramSaslServer)
[2019-05-15 10:14:00,813] DEBUG Set SASL server state to AUTHENTICATE during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,814] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
FAILED (org.apache.kafka.common.security.scram.internals.ScramSaslServer)
[2019-05-15 10:14:00,814] DEBUG Set SASL server state to FAILED during 
authentication 
(org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-05-15 10:14:00,814] INFO [SocketServer brokerId=2] Failed authentication 
with 10.101.60.15 (Authentication failed during authentication due to invalid 
credentials with SASL mechanism SCRAM-SHA-512) 
(org.apache.kafka.common.network.Selector)
[2019-05-15 10:14:00,815] DEBUG [SocketServer brokerId=2] Connection with 
10.101.60.15 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at 
org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:573)
at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:94)
at 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:267)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:536)
at org.apache.kafka.common.network.Selector.poll(Selector.java:472)
at kafka.network.Processor.poll(SocketServer.scala:830)
at kafka.network.Processor.run(SocketServer.scala:730)
at java.lang.Thread.run(Thread.java:748)


# User creation in ZK & output
/opt/kafka/bin/kafka-configs.sh --zookeeper :2181 --alter --add-config 
'SCRAM-SHA-512=[password=]' --entity-type users --entity-name admin
Configs for user-principal 'admin' are 
SCRAM-SHA-512=salt=bnBicjI4NWd5dDBweGJoMmJ1bnlzdzFxYQ==,stored_key=x,server_key=xx==,iterations=4096
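
To double-check what kafka-configs.sh actually stored for that user, the
matching describe call would be (ZK host elided, as above):

/opt/kafka/bin/kafka-configs.sh --zookeeper :2181 --describe \
  --entity-type users --entity-name admin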


# ps fauxww
kafka 2523  7.1 15.9 5838668 972848 ?  Ssl  mai14  52:46 java -Xmx1G 
-Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
-XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent 
-Djava.awt.headless=true -Xloggc:/var/log/kafka/kafkaServer-gc.log -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M 
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.port=9990 
-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf 
-Djava.rmi.server.hostname=x -Dkafka.logs.dir=/var/log/kafka 
-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties -cp 
/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.2.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.2.0.jar:/opt/kafka/bin/../libs/connect-file-2.2.0.jar:/opt/kafka/bin/../libs/connect-json-2.2.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.2.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.2.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b42.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.8.jar:/opt/kafka/bin/../li