[jira] [Commented] (KAFKA-8128) Dynamic JAAS change for clients

2023-08-17 Thread Gabor Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755543#comment-17755543
 ] 

Gabor Somogyi commented on KAFKA-8128:
--

Sounds great.

> Dynamic JAAS change for clients
> ---
>
> Key: KAFKA-8128
> URL: https://issues.apache.org/jira/browse/KAFKA-8128
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Gabor Somogyi
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>
> Clients using JAAS-based authentication are currently forced to restart 
> themselves in order to reload the JAAS configuration. We could
> - make {{sasl.jaas.config}} dynamically configurable, better equipping 
> clients to handle changing tokens etc.
> - detect file-system-level changes to the configured JAAS and reload the context.
> Original issue:
> A re-authentication feature is being implemented on the broker side which 
> will force consumer/producer instances to re-authenticate from time to time. 
> It would be good to be able to set the latest delegation token dynamically 
> instead of re-creating consumer/producer instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-8128) Dynamic delegation token change possibility for consumer/producer

2023-07-31 Thread Gabor Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17749087#comment-17749087
 ] 

Gabor Somogyi commented on KAFKA-8128:
--

It took some time to remember what the original issue was, since the Jira 
reporter didn't write it down clearly.
Now I remember that the JAAS context is definitely not re-loaded, which is the 
root cause. As I recall, the JVM offers a way to reload the JAAS context 
manually, but it had no effect on the producer/consumer. That last statement 
may or may not still be valid, since it was 4 years ago...

All in all, it would be good to have an explicit API to change the token; that 
would certainly solve many issues.
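No such explicit token-change API exists in the Kafka client today; as a rough sketch only (every name below is hypothetical, not Kafka API), the idea is an atomically swappable holder that the SASL re-authentication path would read each time, so a refreshed token takes effect without re-creating the consumer/producer:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch -- none of these names exist in the Kafka API.
// Idea: the authentication callback reads the current token on every
// re-authentication, so updating the holder takes effect without
// re-creating the consumer/producer instance.
final class DelegationTokenHolder {
    private final AtomicReference<String> tokenHmac = new AtomicReference<>();

    // Called by the application whenever a fresh delegation token arrives.
    void update(String newHmac) {
        tokenHmac.set(newHmac);
    }

    // Would be called by the SASL callback on each re-authentication.
    String current() {
        return tokenHmac.get();
    }
}
```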

> Dynamic delegation token change possibility for consumer/producer
> -
>
> Key: KAFKA-8128
> URL: https://issues.apache.org/jira/browse/KAFKA-8128
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Gabor Somogyi
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>
> A re-authentication feature is being implemented on the broker side which 
> will force consumer/producer instances to re-authenticate from time to time. 
> It would be good to be able to set the latest delegation token dynamically 
> instead of re-creating consumer/producer instances.





[jira] [Updated] (KAFKA-10318) Default API timeout must be enforced to be greater than request timeout just like in AdminClient

2020-07-28 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-10318:
--
Affects Version/s: 2.5.0

> Default API timeout must be enforced to be greater than request timeout just 
> like in AdminClient
> 
>
> Key: KAFKA-10318
> URL: https://issues.apache.org/jira/browse/KAFKA-10318
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> https://github.com/apache/kafka/blob/66563e712b0b9f84f673b262f2fb87c03110084d/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L545-L555
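The linked KafkaAdminClient lines validate the default API timeout against the request timeout; the issue asks for the same enforcement in the other clients. A plain-Java sketch of that kind of check (illustrative only, not the actual Kafka client code):

```java
// Illustrative sketch, not the actual Kafka client code: reject a
// default API timeout smaller than the request timeout, mirroring the
// validation the linked KafkaAdminClient lines perform.
final class TimeoutValidation {
    static int checkedDefaultApiTimeoutMs(int defaultApiTimeoutMs, int requestTimeoutMs) {
        if (defaultApiTimeoutMs < requestTimeoutMs) {
            throw new IllegalArgumentException(
                "default.api.timeout.ms (" + defaultApiTimeoutMs
                    + ") must not be smaller than request.timeout.ms ("
                    + requestTimeoutMs + ")");
        }
        return defaultApiTimeoutMs;
    }
}
```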



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10318) Default API timeout must be enforced to be greater than request timeout just like in AdminClient

2020-07-28 Thread Gabor Somogyi (Jira)
Gabor Somogyi created KAFKA-10318:
-

 Summary: Default API timeout must be enforced to be greater than 
request timeout just like in AdminClient
 Key: KAFKA-10318
 URL: https://issues.apache.org/jira/browse/KAFKA-10318
 Project: Kafka
  Issue Type: Bug
Reporter: Gabor Somogyi


https://github.com/apache/kafka/blob/66563e712b0b9f84f673b262f2fb87c03110084d/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L545-L555






[jira] [Commented] (KAFKA-10318) Default API timeout must be enforced to be greater than request timeout just like in AdminClient

2020-07-28 Thread Gabor Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17166472#comment-17166472
 ] 

Gabor Somogyi commented on KAFKA-10318:
---

cc [~viktorsomogyi]

> Default API timeout must be enforced to be greater than request timeout just 
> like in AdminClient
> 
>
> Key: KAFKA-10318
> URL: https://issues.apache.org/jira/browse/KAFKA-10318
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> https://github.com/apache/kafka/blob/66563e712b0b9f84f673b262f2fb87c03110084d/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L545-L555





[jira] [Commented] (KAFKA-8468) AdminClient.deleteTopics doesn't wait until topic is deleted

2019-08-26 Thread Gabor Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915623#comment-16915623
 ] 

Gabor Somogyi commented on KAFKA-8468:
--

[~viktorsomogyi] thanks for investing your time! Regarding your finding, that 
was my guess as well.

Let me know once you've solved this and I'll remove ZKUtils from the Spark code.

 

> AdminClient.deleteTopics doesn't wait until topic is deleted
> 
>
> Key: KAFKA-8468
> URL: https://issues.apache.org/jira/browse/KAFKA-8468
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.0, 2.2.1, 2.4.0
>Reporter: Gabor Somogyi
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>
> Please see the example app to reproduce the issue: 
> https://github.com/gaborgsomogyi/kafka-topic-stress
> ZKUtils has been deprecated since Kafka version 2.0.0 but there is no real 
> alternative.
> * deleteTopics doesn't wait
> * ZookeeperClient has "private [kafka]" visibility



--
This message was sent by Atlassian JIRA
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8809) Infinite retry if secure cluster tried to be reached from non-secure consumer

2019-08-16 Thread Gabor Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909033#comment-16909033
 ] 

Gabor Somogyi commented on KAFKA-8809:
--

cc [~viktorsomogyi]

> Infinite retry if secure cluster tried to be reached from non-secure consumer
> -
>
> Key: KAFKA-8809
> URL: https://issues.apache.org/jira/browse/KAFKA-8809
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.3.0, 2.4.0
>Reporter: Gabor Somogyi
>Priority: Critical
>
> In such a case the following happens repeatedly, without any exception being thrown:
> {code:java}
> 19/08/15 04:10:44 INFO AppInfoParser: Kafka version: 2.3.0
> 19/08/15 04:10:44 INFO AppInfoParser: Kafka commitId: fc1aaa116b661c8a
> 19/08/15 04:10:44 INFO AppInfoParser: Kafka startTimeMs: 1565867444977
> 19/08/15 04:10:44 INFO KafkaConsumer: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Subscribed to topic(s): topic-68f2c4c2-71a4-4380-a7c4-6fe0b9eea7ef
> 19/08/15 04:10:44 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> 19/08/15 04:10:45 WARN NetworkClient: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
> 19/08/15 04:10:45 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> 19/08/15 04:10:45 WARN NetworkClient: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
> 19/08/15 04:10:45 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> 19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
> 19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> 19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
> 19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> 19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
> 19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> 19/08/15 04:10:47 WARN NetworkClient: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
> 19/08/15 04:10:47 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> 19/08/15 04:10:47 WARN NetworkClient: [Consumer clientId=consumer-1, 
> groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
>  Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
> 19/08/15 04:10:47 INFO Selector: [SocketServer brokerId=0] Failed 
> authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
> during SASL handshake.)
> {code}
> I've tried to find a timeout or retry-count setting but nothing helped.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8809) Infinite retry if secure cluster tried to be reached from non-secure consumer

2019-08-16 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-8809:


 Summary: Infinite retry if secure cluster tried to be reached from 
non-secure consumer
 Key: KAFKA-8809
 URL: https://issues.apache.org/jira/browse/KAFKA-8809
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.3.0, 2.4.0
Reporter: Gabor Somogyi


In such a case the following happens repeatedly, without any exception being thrown:
{code:java}
19/08/15 04:10:44 INFO AppInfoParser: Kafka version: 2.3.0
19/08/15 04:10:44 INFO AppInfoParser: Kafka commitId: fc1aaa116b661c8a
19/08/15 04:10:44 INFO AppInfoParser: Kafka startTimeMs: 1565867444977
19/08/15 04:10:44 INFO KafkaConsumer: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Subscribed to topic(s): topic-68f2c4c2-71a4-4380-a7c4-6fe0b9eea7ef
19/08/15 04:10:44 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:45 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:45 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:45 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:45 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:47 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:47 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:47 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:47 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
{code}
I've tried to find a timeout or retry-count setting but nothing helped.
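A bounded-retry guard is effectively what this report asks for. As an illustration of the desired behavior only (not Kafka client code), a loop that gives up after a fixed number of consecutive failed attempts instead of retrying forever:

```java
import java.util.function.BooleanSupplier;

// Illustration of the desired client behavior, not Kafka code: give up
// after a fixed number of consecutive failed connection/authentication
// attempts and surface an error instead of retrying indefinitely.
final class BoundedRetry {
    static boolean attempt(BooleanSupplier connectOnce, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if (connectOnce.getAsBoolean()) {
                return true; // connected successfully
            }
        }
        throw new IllegalStateException(
            "giving up after " + maxAttempts + " failed attempts");
    }
}
```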






[jira] [Updated] (KAFKA-8468) AdminClient.deleteTopics doesn't wait until topic is deleted

2019-07-03 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-8468:
-
Affects Version/s: 2.3.0

> AdminClient.deleteTopics doesn't wait until topic is deleted
> 
>
> Key: KAFKA-8468
> URL: https://issues.apache.org/jira/browse/KAFKA-8468
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.0, 2.2.1
>Reporter: Gabor Somogyi
>Priority: Major
>
> Please see the example app to reproduce the issue: 
> https://github.com/gaborgsomogyi/kafka-topic-stress
> ZKUtils has been deprecated since Kafka version 2.0.0 but there is no real 
> alternative.
> * deleteTopics doesn't wait
> * ZookeeperClient has "private [kafka]" visibility



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8468) AdminClient.deleteTopics doesn't wait until topic is deleted

2019-06-03 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-8468:


 Summary: AdminClient.deleteTopics doesn't wait until topic is 
deleted
 Key: KAFKA-8468
 URL: https://issues.apache.org/jira/browse/KAFKA-8468
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.2.1, 2.2.0
Reporter: Gabor Somogyi


Please see the example app to reproduce the issue: 
https://github.com/gaborgsomogyi/kafka-topic-stress

ZKUtils has been deprecated since Kafka version 2.0.0 but there is no real alternative.
* deleteTopics doesn't wait
* ZookeeperClient has "private [kafka]" visibility
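Until deleteTopics offers a blocking variant, callers typically poll. A generic sketch of that workaround (illustrative, no Kafka dependency; in practice the predicate would wrap something like checking that AdminClient.listTopics no longer contains the topic):

```java
import java.util.function.BooleanSupplier;

// Workaround sketch (illustrative): poll a condition such as "the topic
// is no longer listed" until it holds or the deadline passes.
final class WaitUntil {
    static boolean poll(BooleanSupplier done, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (done.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // give up if interrupted
                return false;
            }
        }
        return done.getAsBoolean(); // final check at the deadline
    }
}
```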






[jira] [Commented] (KAFKA-8234) Multi-module support for JAAS config property

2019-04-15 Thread Gabor Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16817935#comment-16817935
 ] 

Gabor Somogyi commented on KAFKA-8234:
--

cc [~viktorsomogyi]

> Multi-module support for JAAS config property
> -
>
> Key: KAFKA-8234
> URL: https://issues.apache.org/jira/browse/KAFKA-8234
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gabor Somogyi
>Priority: Major
>
> I've tried to add multiple login modules to the JAAS config property but it's 
> not supported at the moment:
> {code:java}
> Exception in thread "main" org.apache.kafka.common.KafkaException: Failed 
> create new KafkaAdminClient
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:370)
>   at 
> org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:52)
>   at 
> com.kafka.delegationtoken.consumer.SecureKafkaConsumer$.main(SecureKafkaConsumer.scala:96)
>   at 
> com.kafka.delegationtoken.consumer.SecureKafkaConsumer.main(SecureKafkaConsumer.scala)
> Caused by: java.lang.IllegalArgumentException: JAAS config property contains 
> 2 login modules, should be 1 module
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:95)
>   at 
> org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
>   at 
> org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:346)
>   ... 3 more
> {code}
> I wanted to implement a fallback scenario with the sufficient LoginModule flag 
> but the missing multi-module support makes it impossible.





[jira] [Updated] (KAFKA-8234) Multi-module support for JAAS config property

2019-04-15 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-8234:
-
Affects Version/s: 2.2.0

> Multi-module support for JAAS config property
> -
>
> Key: KAFKA-8234
> URL: https://issues.apache.org/jira/browse/KAFKA-8234
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> I've tried to add multiple login modules to the JAAS config property but it's 
> not supported at the moment:
> {code:java}
> Exception in thread "main" org.apache.kafka.common.KafkaException: Failed 
> create new KafkaAdminClient
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:370)
>   at 
> org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:52)
>   at 
> com.kafka.delegationtoken.consumer.SecureKafkaConsumer$.main(SecureKafkaConsumer.scala:96)
>   at 
> com.kafka.delegationtoken.consumer.SecureKafkaConsumer.main(SecureKafkaConsumer.scala)
> Caused by: java.lang.IllegalArgumentException: JAAS config property contains 
> 2 login modules, should be 1 module
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:95)
>   at 
> org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
>   at 
> org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:346)
>   ... 3 more
> {code}
> I wanted to implement a fallback scenario with the sufficient LoginModule flag 
> but the missing multi-module support makes it impossible.





[jira] [Created] (KAFKA-8234) Multi-module support for JAAS config property

2019-04-15 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-8234:


 Summary: Multi-module support for JAAS config property
 Key: KAFKA-8234
 URL: https://issues.apache.org/jira/browse/KAFKA-8234
 Project: Kafka
  Issue Type: Improvement
Reporter: Gabor Somogyi


I've tried to add multiple login modules to the JAAS config property but it's 
not supported at the moment:
{code:java}
Exception in thread "main" org.apache.kafka.common.KafkaException: Failed 
create new KafkaAdminClient
at 
org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:370)
at 
org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:52)
at 
com.kafka.delegationtoken.consumer.SecureKafkaConsumer$.main(SecureKafkaConsumer.scala:96)
at 
com.kafka.delegationtoken.consumer.SecureKafkaConsumer.main(SecureKafkaConsumer.scala)
Caused by: java.lang.IllegalArgumentException: JAAS config property contains 2 
login modules, should be 1 module
at 
org.apache.kafka.common.security.JaasContext.load(JaasContext.java:95)
at 
org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at 
org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at 
org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at 
org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:346)
... 3 more
{code}
I wanted to implement a fallback scenario with the sufficient LoginModule flag 
but the missing multi-module support makes it impossible.
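The exception in the stack trace comes from JaasContext.load, which accepts exactly one login module in the sasl.jaas.config value. A rough sketch of that restriction (purely illustrative: it approximates module counting by ';'-terminated entries, whereas the real code uses the JVM's JAAS configuration parsing):

```java
// Illustrative only: approximate the "exactly one login module" rule
// by counting ';'-terminated entries in the sasl.jaas.config value.
final class JaasConfigCheck {
    static int moduleCount(String jaasConfig) {
        int count = 0;
        for (String part : jaasConfig.split(";")) {
            if (!part.trim().isEmpty()) {
                count++;
            }
        }
        return count;
    }

    static void requireSingleModule(String jaasConfig) {
        int n = moduleCount(jaasConfig);
        if (n != 1) {
            throw new IllegalArgumentException(
                "JAAS config property contains " + n
                    + " login modules, should be 1 module");
        }
    }
}
```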





[jira] [Updated] (KAFKA-8128) Dynamic delegation token change possibility for consumer/producer

2019-03-19 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-8128:
-
Affects Version/s: 2.2.0

> Dynamic delegation token change possibility for consumer/producer
> -
>
> Key: KAFKA-8128
> URL: https://issues.apache.org/jira/browse/KAFKA-8128
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> A re-authentication feature is being implemented on the broker side which 
> will force consumer/producer instances to re-authenticate from time to time. 
> It would be good to be able to set the latest delegation token dynamically 
> instead of re-creating consumer/producer instances.





[jira] [Commented] (KAFKA-8128) Dynamic delegation token change possibility for consumer/producer

2019-03-19 Thread Gabor Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796147#comment-16796147
 ] 

Gabor Somogyi commented on KAFKA-8128:
--

cc [~viktorsomogyi], this may be interesting for you.

> Dynamic delegation token change possibility for consumer/producer
> -
>
> Key: KAFKA-8128
> URL: https://issues.apache.org/jira/browse/KAFKA-8128
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gabor Somogyi
>Priority: Major
>
> A re-authentication feature is being implemented on the broker side which 
> will force consumer/producer instances to re-authenticate from time to time. 
> It would be good to be able to set the latest delegation token dynamically 
> instead of re-creating consumer/producer instances.





[jira] [Created] (KAFKA-8128) Dynamic delegation token change possibility for consumer/producer

2019-03-19 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-8128:


 Summary: Dynamic delegation token change possibility for 
consumer/producer
 Key: KAFKA-8128
 URL: https://issues.apache.org/jira/browse/KAFKA-8128
 Project: Kafka
  Issue Type: Improvement
Reporter: Gabor Somogyi


A re-authentication feature is being implemented on the broker side which will 
force consumer/producer instances to re-authenticate from time to time. It 
would be good to be able to set the latest delegation token dynamically 
instead of re-creating consumer/producer instances.





[jira] [Updated] (KAFKA-7927) Old command line tools should notify user when unsupported isolation-level parameter provided

2019-02-20 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-7927:
-
Summary: Old command line tools should notify user when unsupported 
isolation-level parameter provided  (was: Read committed receives aborted 
events)

> Old command line tools should notify user when unsupported isolation-level 
> parameter provided
> -
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a Kafka client produces ~30k events and aborts the transaction at the 
> end, a consumer can read part of the aborted messages even when 
> "isolation.level" is set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> The same behavior was seen when the consumer was re-implemented in Scala.
> The whole application can be found here: 
> https://github.com/gaborgsomogyi/kafka-semantics-tester





[jira] [Commented] (KAFKA-7927) Old command line tools should notify user when unsupported isolation-level parameter provided

2019-02-20 Thread Gabor Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16772763#comment-16772763
 ] 

Gabor Somogyi commented on KAFKA-7927:
--

[~guozhang] updated, feel free to adjust...

> Old command line tools should notify user when unsupported isolation-level 
> parameter provided
> -
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a Kafka client produces ~30k events and aborts the transaction at the 
> end, a consumer can read part of the aborted messages even when 
> "isolation.level" is set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> The same behavior was seen when the consumer was re-implemented in Scala.
> The whole application can be found here: 
> https://github.com/gaborgsomogyi/kafka-semantics-tester





[jira] [Commented] (KAFKA-7927) Read committed receives aborted events

2019-02-19 Thread Gabor Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16771798#comment-16771798
 ] 

Gabor Somogyi commented on KAFKA-7927:
--

I've tested it and it works fine. That said, a warning/error would be good to 
notify users.

> Read committed receives aborted events
> --
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a Kafka client produces ~30k events and aborts the transaction at the 
> end, a consumer can read part of the aborted messages even when 
> "isolation.level" is set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> The same behavior was seen when the consumer was re-implemented in Scala.
> The whole application can be found here: 
> https://github.com/gaborgsomogyi/kafka-semantics-tester





[jira] [Commented] (KAFKA-7927) Read committed receives aborted events

2019-02-15 Thread Gabor Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769059#comment-16769059
 ] 

Gabor Somogyi commented on KAFKA-7927:
--

[~huxi_2b] that's a good point, I will retest it...
As a side note, I'm most probably not the only one who will try this 
unsupported scenario; it's worth adding a warning to convey this.
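For reference, a hedged example of how read_committed is expected to be passed to the newer, bootstrap-server based console consumer (exact flag support varies by Kafka version); the old --zookeeper invocation quoted below silently ignores the isolation level:

```shell
# Hedged example: the bootstrap-server based console consumer honors
# the isolation level; the old --zookeeper consumer does not.
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic src-topic --from-beginning \
  --isolation-level read_committed
```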


> Read committed receives aborted events
> --
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a Kafka client produces ~30k events and aborts the transaction at the 
> end, a consumer can read part of the aborted messages even when 
> "isolation.level" is set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> The same behavior was seen when the consumer was re-implemented in Scala.
> The whole application can be found here: 
> https://github.com/gaborgsomogyi/kafka-semantics-tester





[jira] [Updated] (KAFKA-7927) Read committed receives aborted events

2019-02-13 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-7927:
-
Description: 
When a Kafka client produces ~30k events and aborts the transaction at the 
end, a consumer can read part of the aborted messages even when 
"isolation.level" is set to "READ_COMMITTED".

Kafka client version: 2.0.0
Kafka broker version: 1.0.0
Producer:
{code:java}
java -jar 
kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
{code}
See attached code.
Consumer:
{code:java}
kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
--from-beginning --isolation-level read_committed
{code}
The same behavior is seen when the consumer is re-implemented in Scala.

The whole application can be found here: 
https://github.com/gaborgsomogyi/kafka-semantics-tester

  was:
When a kafka client produces ~30k events and at the end it aborts the 
transaction a consumer can read part of the aborted messages when 
"isolation.level" set to "READ_COMMITTED".

Kafka client version: 2.0.0
Kafka broker version: 1.0.0
Producer:
{code:java}
java -jar 
kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
{code}
See attached code.
Consumer:
{code:java}
kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
--from-beginning --isolation-level read_committed
{code}
Same behavior seen when re-implemented the consumer in scala.


> Read committed receives aborted events
> --
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a kafka client produces ~30k events and at the end it aborts the 
> transaction a consumer can read part of the aborted messages when 
> "isolation.level" set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> Same behavior seen when re-implemented the consumer in scala.
> The whole application can be found here: 
> https://github.com/gaborgsomogyi/kafka-semantics-tester





[jira] [Updated] (KAFKA-7927) Read committed receives aborted events

2019-02-13 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-7927:
-
Attachment: consumer.log

> Read committed receives aborted events
> --
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a kafka client produces ~30k events and at the end it aborts the 
> transaction a consumer can read part of the aborted messages when 
> "isolation.level" set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> Same behavior seen when re-implemented the consumer in scala.





[jira] [Updated] (KAFKA-7927) Read committed receives aborted events

2019-02-13 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-7927:
-
Attachment: producer.log.gz

> Read committed receives aborted events
> --
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a kafka client produces ~30k events and at the end it aborts the 
> transaction a consumer can read part of the aborted messages when 
> "isolation.level" set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> Same behavior seen when re-implemented the consumer in scala.





[jira] [Updated] (KAFKA-7927) Read committed receives aborted events

2019-02-13 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-7927:
-
Attachment: KafkaProducer.scala

> Read committed receives aborted events
> --
>
> Key: KAFKA-7927
> URL: https://issues.apache.org/jira/browse/KAFKA-7927
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core, producer 
>Affects Versions: 1.0.0
>Reporter: Gabor Somogyi
>Priority: Blocker
> Attachments: KafkaProducer.scala, consumer.log, producer.log.gz
>
>
> When a kafka client produces ~30k events and at the end it aborts the 
> transaction a consumer can read part of the aborted messages when 
> "isolation.level" set to "READ_COMMITTED".
> Kafka client version: 2.0.0
> Kafka broker version: 1.0.0
> Producer:
> {code:java}
> java -jar 
> kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
> gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
> {code}
> See attached code.
> Consumer:
> {code:java}
> kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
> --from-beginning --isolation-level read_committed
> {code}
> Same behavior seen when re-implemented the consumer in scala.





[jira] [Created] (KAFKA-7927) Read committed receives aborted events

2019-02-13 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-7927:


 Summary: Read committed receives aborted events
 Key: KAFKA-7927
 URL: https://issues.apache.org/jira/browse/KAFKA-7927
 Project: Kafka
  Issue Type: Bug
  Components: consumer, core, producer 
Affects Versions: 1.0.0
Reporter: Gabor Somogyi


When a Kafka client produces ~30k events and aborts the transaction at the end, 
a consumer can read part of the aborted messages when "isolation.level" is set 
to "READ_COMMITTED".

Kafka client version: 2.0.0
Kafka broker version: 1.0.0
Producer:
{code:java}
java -jar 
kafka-producer/target/kafka-producer-0.0.1-SNAPSHOT-jar-with-dependencies.jar 
gsomogyi-cdh5144-220cloudera2-1.gce.cloudera.com:9092 src-topic
{code}
See attached code.
Consumer:
{code:java}
kafka-console-consumer --zookeeper localhost:2181 --topic src-topic 
--from-beginning --isolation-level read_committed
{code}
The same behavior is seen when the consumer is re-implemented in Scala.
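The client-side setup this reproduction depends on can be sketched with the 
relevant configuration. The snippet below only builds the properties and 
outlines the intended produce/abort sequence in comments; it is a minimal 
sketch, not the attached reproducer, and the broker address, transactional id, 
and group id are illustrative placeholders:

```java
import java.util.Properties;

public class ReadCommittedRepro {
    public static void main(String[] args) {
        // Producer side: a transactional.id (which implies idempotence) is
        // required so that abortTransaction() marks the batch as aborted.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        producerProps.put("transactional.id", "repro-tx-id");     // placeholder
        producerProps.put("enable.idempotence", "true");

        // Consumer side: read_committed should skip records belonging to
        // aborted transactions -- the behavior this report says is violated.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        consumerProps.put("group.id", "repro-group");             // placeholder
        consumerProps.put("isolation.level", "read_committed");

        // Intended flow with these properties (needs kafka-clients on the
        // classpath and a running broker, so it is left as comments here):
        //   producer.initTransactions();
        //   producer.beginTransaction();
        //   ... send ~30k records ...
        //   producer.abortTransaction(); // consumer must NOT see these records
        System.out.println("consumer isolation.level="
                + consumerProps.getProperty("isolation.level"));
    }
}
```

With a broker available, replacing the comment block with real `KafkaProducer`/`KafkaConsumer` calls reproduces the scenario described above.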





[jira] [Updated] (KAFKA-7677) Client login with already existing JVM subject

2018-11-27 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-7677:
-
Affects Version/s: 2.2.0

> Client login with already existing JVM subject
> --
>
> Key: KAFKA-7677
> URL: https://issues.apache.org/jira/browse/KAFKA-7677
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 2.2.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> If the JVM is already logged in to the KDC and has a Subject + TGT in its 
> security context, it can be used by clients without logging in again. Example code:
> {code:java}
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser().doAs(
>   new java.security.PrivilegedExceptionAction[Unit] {
>     override def run(): Unit = {
>       // Reuse the Subject already present in the JVM security context
>       val subject = 
> javax.security.auth.Subject.getSubject(java.security.AccessController.getContext())
>       val adminClient = AdminClient.create...
>     }
>   }
> )
> {code}
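The mechanism the snippet above relies on -- code running inside doAs can see 
the Subject through the access-control context -- can be checked with plain JDK 
classes, without Kafka or Hadoop. A minimal sketch, where the empty Subject 
stands in for one populated by a real Kerberos login (note that 
Subject.getSubject is deprecated in recent JDKs):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class SubjectVisibility {
    public static void main(String[] args) {
        // Stand-in for a Subject populated by an earlier Kerberos login.
        Subject loggedIn = new Subject();

        Subject.doAs(loggedIn, (PrivilegedAction<Void>) () -> {
            // Inside doAs the Subject is attached to the current
            // access-control context, which is where a client library could
            // find it instead of performing its own login.
            Subject seen = Subject.getSubject(AccessController.getContext());
            System.out.println("same subject visible: " + (seen == loggedIn));
            return null;
        });
    }
}
```

This is what would let an AdminClient (or producer/consumer) created inside the doAs block pick up the existing credentials, as the feature request proposes.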


