[jira] [Commented] (KAFKA-9146) Add option to force delete members in stream reset tool

2020-01-30 Thread feyman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027258#comment-17027258
 ] 

feyman commented on KAFKA-9146:
---

Hi, [~bchen225242], [~mjsax], I have created a PR for this JIRA: 
[https://github.com/apache/kafka/pull/8020]. Could you kindly help review? 
Thanks!

> Add option to force delete members in stream reset tool
> ---
>
> Key: KAFKA-9146
> URL: https://issues.apache.org/jira/browse/KAFKA-9146
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, streams
>Reporter: Boyang Chen
>Assignee: feyman
>Priority: Major
>  Labels: newbie
>
> Sometimes people want to reset the stream application sooner, but are blocked 
> by the left-over members inside the group coordinator, which only expire after 
> the session timeout. When a user configures a really long session timeout, it 
> could prevent the group from clearing. We should consider adding support to 
> clean up members by forcing them to leave the group. To do that (see the 
> sketch after this list), 
>  # If the stream application is already on static membership, we could call 
> adminClient.removeMembersFromGroup directly
>  # If the application is on dynamic membership, we should modify the 
> adminClient.removeMembersFromGroup interface to allow deletion based on 
> member.id.
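
For the static-membership case (point 1), a minimal sketch in Scala of the 
existing Admin API call; the broker address, group id, and instance id are 
made-up placeholders:

{code:scala}
import java.util.{Collections, Properties}

import org.apache.kafka.clients.admin.{AdminClient, MemberToRemove, RemoveMembersFromConsumerGroupOptions}

object ForceRemoveStaticMember extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // placeholder address

  val admin = AdminClient.create(props)
  try {
    // Today removal is keyed on group.instance.id, i.e. static members only;
    // point 2 above is about extending this to dynamic members via member.id.
    val options = new RemoveMembersFromConsumerGroupOptions(
      Collections.singleton(new MemberToRemove("instance-1"))) // placeholder instance id
    admin.removeMembersFromConsumerGroup("my-streams-app", options) // placeholder group id
      .all().get()
  } finally admin.close()
}
{code}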



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8503) Implement default.api.timeout.ms for AdminClient

2020-01-30 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8503.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Implement default.api.timeout.ms for AdminClient
> 
>
> Key: KAFKA-8503
> URL: https://issues.apache.org/jira/browse/KAFKA-8503
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
> Fix For: 2.5.0
>
>
> This issue tracks the implementation of KIP-533: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-533%3A+Add+default+api+timeout+to+AdminClient
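
For reference, a minimal sketch (Scala, assuming the 2.5.0 client) of how the 
new {{default.api.timeout.ms}} config and a per-call timeout interact; the 
broker address and timeout values are placeholders:

{code:scala}
import java.util.Properties

import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, ListTopicsOptions}

object AdminTimeouts extends App {
  val props = new Properties()
  props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder
  // Default timeout for API calls that don't specify one explicitly:
  props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, "60000")

  val admin = AdminClient.create(props)
  try {
    // Uses default.api.timeout.ms:
    admin.listTopics().names().get()
    // A per-call timeout takes precedence over the default:
    admin.listTopics(new ListTopicsOptions().timeoutMs(5000)).names().get()
  } finally admin.close()
}
{code}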



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8503) Implement default.api.timeout.ms for AdminClient

2020-01-30 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-8503:
---
Description: This issue tracks the implementation of KIP-533: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-533%3A+Add+default+api+timeout+to+AdminClient
  (was: The admin client takes a `retries` config similar to the producer. The 
default value is 5. Individual APIs also accept an optional timeout, which is 
defaulted to `request.timeout.ms`. The call will fail if either `retries` or 
the API timeout is exceeded. This is not very intuitive. I think a user would 
expect to wait if they provided a timeout and the operation cannot be 
completed. In general, timeouts are much easier for users to work with and 
reason about.

A couple options are either to ignore `retries` in this case or to increase the 
default value of `retries` to something large and not likely to be exceeded. I 
propose to do the first. Longer term, we could consider deprecating `retries` 
and avoiding the overloading of `request.timeout.ms` by providing a 
`default.api.timeout.ms` similar to the consumer.)

> Implement default.api.timeout.ms for AdminClient
> 
>
> Key: KAFKA-8503
> URL: https://issues.apache.org/jira/browse/KAFKA-8503
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> This issue tracks the implementation of KIP-533: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-533%3A+Add+default+api+timeout+to+AdminClient



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8503) Implement default.api.timeout.ms for AdminClient

2020-01-30 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-8503:
---
Summary: Implement default.api.timeout.ms for AdminClient  (was: 
AdminClient should ignore retries config if a custom timeout is provided)

> Implement default.api.timeout.ms for AdminClient
> 
>
> Key: KAFKA-8503
> URL: https://issues.apache.org/jira/browse/KAFKA-8503
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> The admin client takes a `retries` config similar to the producer. The 
> default value is 5. Individual APIs also accept an optional timeout, which is 
> defaulted to `request.timeout.ms`. The call will fail if either `retries` or 
> the API timeout is exceeded. This is not very intuitive. I think a user would 
> expect to wait if they provided a timeout and the operation cannot be 
> completed. In general, timeouts are much easier for users to work with and 
> reason about.
> A couple options are either to ignore `retries` in this case or to increase 
> the default value of `retries` to something large and not likely to be 
> exceeded. I propose to do the first. Longer term, we could consider 
> deprecating `retries` and avoiding the overloading of `request.timeout.ms` by 
> providing a `default.api.timeout.ms` similar to the consumer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8503) AdminClient should ignore retries config if a custom timeout is provided

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027231#comment-17027231
 ] 

ASF GitHub Bot commented on KAFKA-8503:
---

hachikuji commented on pull request #8011: KAFKA-8503; Add default api timeout 
to AdminClient (KIP-533)
URL: https://github.com/apache/kafka/pull/8011
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> AdminClient should ignore retries config if a custom timeout is provided
> 
>
> Key: KAFKA-8503
> URL: https://issues.apache.org/jira/browse/KAFKA-8503
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> The admin client takes a `retries` config similar to the producer. The 
> default value is 5. Individual APIs also accept an optional timeout, which is 
> defaulted to `request.timeout.ms`. The call will fail if either `retries` or 
> the API timeout is exceeded. This is not very intuitive. I think a user would 
> expect to wait if they provided a timeout and the operation cannot be 
> completed. In general, timeouts are much easier for users to work with and 
> reason about.
> A couple options are either to ignore `retries` in this case or to increase 
> the default value of `retries` to something large and not likely to be 
> exceeded. I propose to do the first. Longer term, we could consider 
> deprecating `retries` and avoiding the overloading of `request.timeout.ms` by 
> providing a `default.api.timeout.ms` similar to the consumer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9403) Migrate BaseConsumer to Consumer

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027186#comment-17027186
 ] 

ASF GitHub Bot commented on KAFKA-9403:
---

highluck commented on pull request #7935: KAFKA-9403: Remove BaseConsumer from 
MirrorMaker
URL: https://github.com/apache/kafka/pull/7935
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Migrate BaseConsumer to Consumer
> 
>
> Key: KAFKA-9403
> URL: https://issues.apache.org/jira/browse/KAFKA-9403
> Project: Kafka
>  Issue Type: Improvement
>Reporter: highluck
>Assignee: highluck
>Priority: Minor
>
> BaseConsumerRecord is deprecated, but MirrorMaker is still using 
> BaseConsumerRecord.
>  
> Remove BaseConsumer from MirrorMaker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9403) Migrate BaseConsumer to Consumer

2020-01-30 Thread highluck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

highluck resolved KAFKA-9403.
-
Resolution: Not A Bug

> Migrate BaseConsumer to Consumer
> 
>
> Key: KAFKA-9403
> URL: https://issues.apache.org/jira/browse/KAFKA-9403
> Project: Kafka
>  Issue Type: Improvement
>Reporter: highluck
>Assignee: highluck
>Priority: Minor
>
> BaseConsumerRecord is deprecated, but MirrorMaker is still using 
> BaseConsumerRecord.
>  
> Remove BaseConsumer from MirrorMaker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9483) Add Scala KStream#toTable to the Streams DSL

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027185#comment-17027185
 ] 

ASF GitHub Bot commented on KAFKA-9483:
---

highluck commented on pull request #8024: KAFKA-9483; Add Scala KStream#toTable 
to the Streams DSL
URL: https://github.com/apache/kafka/pull/8024
 
 
   Add Scala KStream#toTable to the Streams DSL
   https://issues.apache.org/jira/browse/KAFKA-9483
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Scala KStream#toTable to the Streams DSL
> 
>
> Key: KAFKA-9483
> URL: https://issues.apache.org/jira/browse/KAFKA-9483
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: highluck
>Assignee: highluck
>Priority: Minor
>
> [KIP-523: Add KStream#toTable to the Streams 
> DSL|https://cwiki.apache.org/confluence/display/KAFKA/KIP-523%3A+Add+KStream%23toTable+to+the+Streams+DSL?src=jira]
>  
> I am trying to add the same function to the Scala DSL
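
For illustration, a sketch of how the proposed Scala DSL method would be used, 
mirroring the Java {{KStream#toTable}} from KIP-523; the topic names are 
placeholders and the exact Scala signature is whatever PR #8024 settles on:

{code:scala}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.kstream.{KStream, KTable}

object ToTableSketch extends App {
  val builder = new StreamsBuilder()
  val stream: KStream[String, String] =
    builder.stream[String, String]("input-topic") // placeholder topic
  // Proposed addition: materialize the latest value per key as a table.
  val table: KTable[String, String] = stream.toTable
  table.toStream.to("output-topic") // placeholder topic
}
{code}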



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-8507) Support --bootstrap-server in all command line tools

2020-01-30 Thread Mitchell (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitchell updated KAFKA-8507:

Comment: was deleted

(was: [https://github.com/apache/kafka/pull/8023])

> Support --bootstrap-server in all command line tools
> 
>
> Key: KAFKA-8507
> URL: https://issues.apache.org/jira/browse/KAFKA-8507
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Jason Gustafson
>Assignee: Mitchell
>Priority: Major
>  Labels: pull-request-available
>
> This is an unambitious initial move toward standardizing the command line 
> tools. We have favored the name {{\-\-bootstrap-server}} in all new tools 
> since it matches the config {{bootstrap.servers}} which is used by all 
> clients. Some older commands use {{\-\-broker-list}} or 
> {{\-\-bootstrap-servers}} and maybe other exotic variations. We should 
> support {{\-\-bootstrap-server}} in all commands and deprecate the other 
> options.
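
For illustration, the same connection argument under an older tool-specific 
flag and under the standardized flag (host and topic are placeholders; the 
older spellings would be deprecated, not removed):

{code}
# Older, tool-specific spelling:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

# Standardized spelling this issue proposes to support everywhere:
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test
{code}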



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-8507) Support --bootstrap-server in all command line tools

2020-01-30 Thread Mitchell (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitchell updated KAFKA-8507:

Comment: was deleted

(was: [https://github.com/apache/kafka/pull/8023] submitted)

> Support --bootstrap-server in all command line tools
> 
>
> Key: KAFKA-8507
> URL: https://issues.apache.org/jira/browse/KAFKA-8507
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Jason Gustafson
>Assignee: Mitchell
>Priority: Major
>  Labels: pull-request-available
>
> This is an unambitious initial move toward standardizing the command line 
> tools. We have favored the name {{\-\-bootstrap-server}} in all new tools 
> since it matches the config {{bootstrap.servers}} which is used by all 
> clients. Some older commands use {{\-\-broker-list}} or 
> {{\-\-bootstrap-servers}} and maybe other exotic variations. We should 
> support {{\-\-bootstrap-server}} in all commands and deprecate the other 
> options.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8507) Support --bootstrap-server in all command line tools

2020-01-30 Thread Mitchell (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitchell updated KAFKA-8507:

Labels: pull-request-available  (was: needs-kip)

> Support --bootstrap-server in all command line tools
> 
>
> Key: KAFKA-8507
> URL: https://issues.apache.org/jira/browse/KAFKA-8507
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Mitchell
>Priority: Major
>  Labels: pull-request-available
>
> This is an unambitious initial move toward standardizing the command line 
> tools. We have favored the name {{\-\-bootstrap-server}} in all new tools 
> since it matches the config {{bootstrap.servers}} which is used by all 
> clients. Some older commands use {{\-\-broker-list}} or 
> {{\-\-bootstrap-servers}} and maybe other exotic variations. We should 
> support {{\-\-bootstrap-server}} in all commands and deprecate the other 
> options.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8507) Support --bootstrap-server in all command line tools

2020-01-30 Thread Mitchell (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027173#comment-17027173
 ] 

Mitchell commented on KAFKA-8507:
-

[https://github.com/apache/kafka/pull/8023] submitted

> Support --bootstrap-server in all command line tools
> 
>
> Key: KAFKA-8507
> URL: https://issues.apache.org/jira/browse/KAFKA-8507
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Mitchell
>Priority: Major
>  Labels: needs-kip
>
> This is an unambitious initial move toward standardizing the command line 
> tools. We have favored the name {{\-\-bootstrap-server}} in all new tools 
> since it matches the config {{bootstrap.servers}} which is used by all 
> clients. Some older commands use {{\-\-broker-list}} or 
> {{\-\-bootstrap-servers}} and maybe other exotic variations. We should 
> support {{\-\-bootstrap-server}} in all commands and deprecate the other 
> options.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8507) Support --bootstrap-server in all command line tools

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027171#comment-17027171
 ] 

ASF GitHub Bot commented on KAFKA-8507:
---

mitchell-h commented on pull request #8023: KAFKA-8507 KIP-499 Unify connection 
name flag for command line tool
URL: https://github.com/apache/kafka/pull/8023
 
 
   This change updates the ConsoleProducer, ConsumerPerformance, 
VerifiableProducer, and VerifiableConsumer classes to add, and prefer, the 
--bootstrap-server flag for defining the connection point of the Kafka cluster.
   
   
   ### Committer Checklist (excluded from commit message)
   - [x] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support --bootstrap-server in all command line tools
> 
>
> Key: KAFKA-8507
> URL: https://issues.apache.org/jira/browse/KAFKA-8507
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Mitchell
>Priority: Major
>  Labels: needs-kip
>
> This is an unambitious initial move toward standardizing the command line 
> tools. We have favored the name {{\-\-bootstrap-server}} in all new tools 
> since it matches the config {{bootstrap.servers}} which is used by all 
> clients. Some older commands use {{\-\-broker-list}} or 
> {{\-\-bootstrap-servers}} and maybe other exotic variations. We should 
> support {{\-\-bootstrap-server}} in all commands and deprecate the other 
> options.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8507) Support --bootstrap-server in all command line tools

2020-01-30 Thread Mitchell (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitchell reassigned KAFKA-8507:
---

Assignee: Mitchell

> Support --bootstrap-server in all command line tools
> 
>
> Key: KAFKA-8507
> URL: https://issues.apache.org/jira/browse/KAFKA-8507
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Mitchell
>Priority: Major
>  Labels: needs-kip
>
> This is an unambitious initial move toward standardizing the command line 
> tools. We have favored the name {{\-\-bootstrap-server}} in all new tools 
> since it matches the config {{bootstrap.servers}} which is used by all 
> clients. Some older commands use {{\-\-broker-list}} or 
> {{\-\-bootstrap-servers}} and maybe other exotic variations. We should 
> support {{\-\-bootstrap-server}} in all commands and deprecate the other 
> options.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9486) Kafka Security

2020-01-30 Thread Kuttaiah (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuttaiah updated KAFKA-9486:

Description: 
My use case is to set up different protocols for inter-broker communication and 
for producer/consumer-to-broker communication.

Hence I have the broker configuration below:

{code}
"zookeeper.sasl.enabled": false
# Disable hostname verification, default is https.
"ssl.endpoint.identification.algorithm":
"inter.broker.listener.name": PLAINTEXT
"listener.name.external.sasl.enabled.mechanisms": OAUTHBEARER
"listener.name.external.oauthbearer.sasl.login.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedLoginCallbackHandler
"listener.name.external.oauthbearer.sasl.server.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedValidatorCallbackHandler
"listener.security.protocol.map": PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
"listener.name.external.oauthbearer.sasl.jaas.config": 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required 
signedLoginStringClaim_ocid=insightAdmin 
signedLoginKeyServiceClass=oracle.insight.common.security.SMSKeyService 
signedValidatorKeyServiceClass=oracle.insight.common.security.SMSKeyService;
"advertised.listeners": 
EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).mydomain:$((${KAFKA_OUTSIDE_PORT} + 
${KAFKA_BROKER_ID}))
{code}

With this I always get:

{code}
[2020-01-30 17:23:55,228] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,633] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,989] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
{code}

From the logs it looks like inter-broker communication is happening via SASL 
even though I set {{"inter.broker.listener.name": PLAINTEXT}}.

Please guide me on what needs to be done to resolve this issue. Am I using the 
right set of configuration, or is any config missing?

thanks

Robin Kuttaiah

  was:
My use case is to set up different protocols for inter-broker communication and 
for producer/consumer-to-broker communication.

Hence I have the broker configuration below:

{code}
"zookeeper.sasl.enabled": false
# Disable hostname verification, default is https.
"ssl.endpoint.identification.algorithm":
"inter.broker.listener.name": PLAINTEXT
"listener.name.external.sasl.enabled.mechanisms": OAUTHBEARER
"listener.name.external.oauthbearer.sasl.login.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedLoginCallbackHandler
"listener.name.external.oauthbearer.sasl.server.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedValidatorCallbackHandler
"listener.security.protocol.map": PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
"listener.name.external.oauthbearer.sasl.jaas.config": 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required 
signedLoginStringClaim_ocid=insightAdmin 
signedLoginKeyServiceClass=oracle.insight.common.security.SMSKeyService 
signedValidatorKeyServiceClass=oracle.insight.common.security.SMSKeyService;
"advertised.listeners": 
EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).mydomain:$((${KAFKA_OUTSIDE_PORT} + 
${KAFKA_BROKER_ID}))
{code}

With this I always get:

{code}
[2020-01-30 17:23:55,228] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,633] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,989] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
{code}

From the logs it looks like inter-broker communication is happening via SASL 
even though I set {{"inter.broker.listener.name": PLAINTEXT}}.

Please guide me on what exactly is missing. This is critical for our release 
which is happening shortly.

thanks

Robin Kuttaiah


> Kafka Security
> --
>
> Key: KAFKA-9486
> URL: https://issues.apache.org/jira/browse/KAFKA-9486
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>   

[jira] [Updated] (KAFKA-9486) Kafka Security

2020-01-30 Thread Kuttaiah (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuttaiah updated KAFKA-9486:

Description: 
My use case is to set up different protocols for inter-broker communication and 
for producer/consumer-to-broker communication.

Hence I have the broker configuration below:

{code}
"zookeeper.sasl.enabled": false
# Disable hostname verification, default is https.
"ssl.endpoint.identification.algorithm":
"inter.broker.listener.name": PLAINTEXT
"listener.name.external.sasl.enabled.mechanisms": OAUTHBEARER
"listener.name.external.oauthbearer.sasl.login.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedLoginCallbackHandler
"listener.name.external.oauthbearer.sasl.server.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedValidatorCallbackHandler
"listener.security.protocol.map": PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
"listener.name.external.oauthbearer.sasl.jaas.config": 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required 
signedLoginStringClaim_ocid=insightAdmin 
signedLoginKeyServiceClass=oracle.insight.common.security.SMSKeyService 
signedValidatorKeyServiceClass=oracle.insight.common.security.SMSKeyService;
"advertised.listeners": 
EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).mydomain:$((${KAFKA_OUTSIDE_PORT} + 
${KAFKA_BROKER_ID}))
{code}

With this I always get:

{code}
[2020-01-30 17:23:55,228] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,633] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,989] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
{code}

From the logs it looks like inter-broker communication is happening via SASL 
even though I set {{"inter.broker.listener.name": PLAINTEXT}}.

Please guide me on what exactly is missing. This is critical for our release 
which is happening shortly.

thanks

Robin Kuttaiah

  was:
My use case is to set up different protocols for inter-broker communication and 
for producer/consumer-to-broker communication.

Hence I have the configuration below:

{code}
"zookeeper.sasl.enabled": false
# Disable hostname verification, default is https.
"ssl.endpoint.identification.algorithm":
"inter.broker.listener.name": PLAINTEXT
"listener.name.external.sasl.enabled.mechanisms": OAUTHBEARER
"listener.name.external.oauthbearer.sasl.login.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedLoginCallbackHandler
"listener.name.external.oauthbearer.sasl.server.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedValidatorCallbackHandler
"listener.security.protocol.map": PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
"listener.name.external.oauthbearer.sasl.jaas.config": 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required 
signedLoginStringClaim_ocid=insightAdmin 
signedLoginKeyServiceClass=oracle.insight.common.security.SMSKeyService 
signedValidatorKeyServiceClass=oracle.insight.common.security.SMSKeyService;
"advertised.listeners": 
EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).mydomain:$((${KAFKA_OUTSIDE_PORT} + 
${KAFKA_BROKER_ID}))
{code}

With this I always get:

{code}
[2020-01-30 17:23:55,228] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,633] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,989] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
{code}

From the logs it looks like inter-broker communication is happening via SASL 
even though I set {{"inter.broker.listener.name": PLAINTEXT}}.

Please guide me on what exactly is missing. This is critical for our release 
which is happening shortly.

thanks

Robin Kuttaiah


> Kafka Security
> --
>
> Key: KAFKA-9486
> URL: https://issues.apache.org/jira/browse/KAFKA-9486
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>  

[jira] [Created] (KAFKA-9486) Kafka Security

2020-01-30 Thread Kuttaiah (Jira)
Kuttaiah created KAFKA-9486:
---

 Summary: Kafka Security
 Key: KAFKA-9486
 URL: https://issues.apache.org/jira/browse/KAFKA-9486
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Kuttaiah


My use case is to set up different protocols for inter-broker communication and 
for producer/consumer-to-broker communication.

Hence I have the configuration below:

{code}
"zookeeper.sasl.enabled": false
# Disable hostname verification, default is https.
"ssl.endpoint.identification.algorithm":
"inter.broker.listener.name": PLAINTEXT
"listener.name.external.sasl.enabled.mechanisms": OAUTHBEARER
"listener.name.external.oauthbearer.sasl.login.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedLoginCallbackHandler
"listener.name.external.oauthbearer.sasl.server.callback.handler.class": 
oracle.insight.common.kafka.security.OAuthBearerSignedValidatorCallbackHandler
"listener.security.protocol.map": PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
"listener.name.external.oauthbearer.sasl.jaas.config": 
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required 
signedLoginStringClaim_ocid=insightAdmin 
signedLoginKeyServiceClass=oracle.insight.common.security.SMSKeyService 
signedValidatorKeyServiceClass=oracle.insight.common.security.SMSKeyService;
"advertised.listeners": 
EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).mydomain:$((${KAFKA_OUTSIDE_PORT} + 
${KAFKA_BROKER_ID}))
{code}

With this I always get:

{code}
[2020-01-30 17:23:55,228] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,633] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
[2020-01-30 17:23:55,989] INFO [SocketServer brokerId=0] Failed authentication 
with /10.244.0.1 (Unexpected Kafka request of type METADATA during SASL 
handshake.) (org.apache.kafka.common.network.Selector)
{code}

From the logs it looks like inter-broker communication is happening via SASL 
even though I set {{"inter.broker.listener.name": PLAINTEXT}}.

Please guide me on what exactly is missing. This is critical for our release 
which is happening shortly.

thanks

Robin Kuttaiah
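
For comparison, a minimal sketch of a dual-listener broker setup in 
server.properties form (host names and ports here are placeholders, not taken 
from this report); note that it also defines {{listeners}}, which binds each 
listener name to its own port:

{code}
# Minimal dual-listener sketch; hosts and ports are placeholders.
listeners=PLAINTEXT://:9092,EXTERNAL://:9093
advertised.listeners=PLAINTEXT://broker0.internal:9092,EXTERNAL://kafka-0.mydomain:32400
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
inter.broker.listener.name=PLAINTEXT
listener.name.external.sasl.enabled.mechanisms=OAUTHBEARER
{code}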



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9485) Dynamic updates to num.recovery.threads.per.data.dir are not applied right away

2020-01-30 Thread Michael Bingham (Jira)
Michael Bingham created KAFKA-9485:
--

 Summary: Dynamic updates to num.recovery.threads.per.data.dir are 
not applied right away
 Key: KAFKA-9485
 URL: https://issues.apache.org/jira/browse/KAFKA-9485
 Project: Kafka
  Issue Type: Improvement
  Components: core, log
Affects Versions: 2.4.0
Reporter: Michael Bingham


The {{num.recovery.threads.per.data.dir}} broker property is a {{cluster-wide}} 
dynamically configurable setting, but it does not appear that it would have any 
dynamic effect on actual broker behavior.

The recovery thread pool is currently only created once when the {{LogManager}} 
is started and the {{loadLogs()}} method is called. If this property is later 
changed dynamically, it would have no effect until the broker is restarted.

This might be confusing to someone modifying this property, so perhaps this 
should be made clearer in the documentation, or the property changed to a 
{{read-only}} one. The only benefit I see to having it be a dynamic config 
property is that it can be applied once for the entire cluster, instead of 
being individually specified in each broker's {{server.properties}} file.
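
For context, a cluster-wide dynamic update of this property is issued roughly 
as below (a Scala sketch against the Admin API; the broker address and thread 
count are placeholders). Per this report, the new value would not take effect 
until the broker restarts:

{code:scala}
import java.util.{Collections, Properties}

import org.apache.kafka.clients.admin.{AdminClient, AlterConfigOp, ConfigEntry}
import org.apache.kafka.common.config.ConfigResource

object RecoveryThreadsUpdate extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // placeholder address

  val admin = AdminClient.create(props)
  try {
    // An empty resource name targets the cluster-wide broker default.
    val clusterDefault = new ConfigResource(ConfigResource.Type.BROKER, "")
    val ops: java.util.Collection[AlterConfigOp] = Collections.singletonList(
      new AlterConfigOp(new ConfigEntry("num.recovery.threads.per.data.dir", "4"),
        AlterConfigOp.OpType.SET))
    admin.incrementalAlterConfigs(Collections.singletonMap(clusterDefault, ops))
      .all().get()
  } finally admin.close()
}
{code}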



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8503) AdminClient should ignore retries config if a custom timeout is provided

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027075#comment-17027075
 ] 

ASF GitHub Bot commented on KAFKA-8503:
---

hachikuji commented on pull request #6913: KAFKA-8503: Ignore retries config if 
a custom timeout is provided
URL: https://github.com/apache/kafka/pull/6913
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> AdminClient should ignore retries config if a custom timeout is provided
> 
>
> Key: KAFKA-8503
> URL: https://issues.apache.org/jira/browse/KAFKA-8503
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> The admin client takes a `retries` config similar to the producer. The 
> default value is 5. Individual APIs also accept an optional timeout, which is 
> defaulted to `request.timeout.ms`. The call will fail if either `retries` or 
> the API timeout is exceeded. This is not very intuitive. I think a user would 
> expect to wait if they provided a timeout and the operation cannot be 
> completed. In general, timeouts are much easier for users to work with and 
> reason about.
> A couple options are either to ignore `retries` in this case or to increase 
> the default value of `retries` to something large and not likely to be 
> exceeded. I propose to do the first. Longer term, we could consider 
> deprecating `retries` and avoiding the overloading of `request.timeout.ms` by 
> providing a `default.api.timeout.ms` similar to the consumer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-8382) Add TimestampedSessionStore

2020-01-30 Thread highluck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027072#comment-17027072
 ] 

highluck edited comment on KAFKA-8382 at 1/30/20 11:22 PM:
---

[~mjsax]

Can I ask you to review it?! thank you!!

 

[https://github.com/apache/kafka/pull/8022]


was (Author: high.lee):
[https://github.com/apache/kafka/pull/8022]

> Add TimestampedSessionStore
> ---
>
> Key: KAFKA-8382
> URL: https://issues.apache.org/jira/browse/KAFKA-8382
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: highluck
>Priority: Minor
>  Labels: kip
>
> Follow up to KIP-258, to complete the KIP by adding TimestampedSessionStores.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8382) Add TimestampedSessionStore

2020-01-30 Thread highluck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027072#comment-17027072
 ] 

highluck commented on KAFKA-8382:
-

[https://github.com/apache/kafka/pull/8022]

> Add TimestampedSessionStore
> ---
>
> Key: KAFKA-8382
> URL: https://issues.apache.org/jira/browse/KAFKA-8382
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: highluck
>Priority: Minor
>  Labels: kip
>
> Follow up to KIP-258, to complete the KIP by adding TimestampedSessionStores.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-8382) Add TimestampedSessionStore

2020-01-30 Thread highluck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

highluck updated KAFKA-8382:

Comment: was deleted

(was: [https://github.com/apache/kafka/pull/8022])

> Add TimestampedSessionStore
> ---
>
> Key: KAFKA-8382
> URL: https://issues.apache.org/jira/browse/KAFKA-8382
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: highluck
>Priority: Minor
>  Labels: kip
>
> Follow up to KIP-258, to complete the KIP by adding TimestampedSessionStores.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9484) Unnecessary LeaderAndIsr update following reassignment completion

2020-01-30 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9484:
--

 Summary: Unnecessary LeaderAndIsr update following reassignment 
completion
 Key: KAFKA-9484
 URL: https://issues.apache.org/jira/browse/KAFKA-9484
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


Following the completion of the reassignment, the controller executes two 
steps: first, it elects a new leader (if needed) and sends a LeaderAndIsr 
update (in any case) with the new target replica set; second, it removes 
unneeded replicas from the replica set and sends another round of LeaderAndIsr 
updates. I am doubting the need for the first round of updates in the case that 
the leader doesn't need changing. 

For example, suppose we have the following reassignment state: 

replicas=[1,2,3,4], adding=[4], removing=[1], isr=[1,2,3,4], leader=2, epoch=10

First the controller will bump the epoch with the target replica set, which 
will result in a round of LeaderAndIsr requests to the target replica set with 
the following state: 

replicas=[2,3,4], adding=[], removing=[], isr=[1,2,3,4], leader=2, epoch=11 

Immediately following this, the controller will bump the epoch again and remove 
the unneeded replica. This will result in another round of LeaderAndIsr 
requests with the following state: 

replicas=[2,3,4], adding=[], removing=[], isr=[1,2,3], leader=2, epoch=12 

The first round of LeaderAndIsr updates puzzles me a bit. It is justified in 
the code with this comment: 

{code} 
B3. Send a LeaderAndIsr request with RS = TRS. This will prevent the leader 
from adding any replica in TRS - ORS back in the isr. 
{code} 
(I think the comment is backwards. It should be ORS (original replica set) - 
TRS (target replica set).) 

It sounds like we are trying to prevent a member of ORS from being added back 
to the ISR, but even if it did get added, it would be removed in the next step 
anyway. In the uncommon case that an ORS replica is out of sync, there does not 
seem to be any benefit to this first update since it is basically paying the 
cost of one write in order to save the speculative cost of one write. 
Additionally, it would be useful if the protocol could enforce the invariant 
that the ISR is always a subset of the replica set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9484) Unnecessary LeaderAndIsr update following reassignment completion

2020-01-30 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-9484:
--

Assignee: Jason Gustafson

> Unnecessary LeaderAndIsr update following reassignment completion
> -
>
> Key: KAFKA-9484
> URL: https://issues.apache.org/jira/browse/KAFKA-9484
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> Following the completion of the reassignment, the controller executes two 
> steps: first, it elects a new leader (if needed) and sends a LeaderAndIsr 
> update (in any case) with the new target replica set; second, it removes 
> unneeded replicas from the replica set and sends another round of 
> LeaderAndIsr updates. I am doubting the need for the first round of updates 
> in the case that the leader doesn't need changing. 
> For example, suppose we have the following reassignment state: 
> replicas=[1,2,3,4], adding=[4], removing=[1], isr=[1,2,3,4], leader=2, 
> epoch=10
> First the controller will bump the epoch with the target replica set, which 
> will result in a round of LeaderAndIsr requests to the target replica set 
> with the following state: 
> replicas=[2,3,4], adding=[], removing=[], isr=[1,2,3,4], leader=2, epoch=11 
> Immediately following this, the controller will bump the epoch again and 
> remove the unneeded replica. This will result in another round of 
> LeaderAndIsr requests with the following state: 
> replicas=[2,3,4], adding=[], removing=[], isr=[1,2,3], leader=2, epoch=12 
> The first round of LeaderAndIsr updates puzzles me a bit. It is justified in 
> the code with this comment: 
> {code} 
> B3. Send a LeaderAndIsr request with RS = TRS. This will prevent the leader 
> from adding any replica in TRS - ORS back in the isr. 
> {code} 
> (I think the comment is backwards. It should be ORS (original replica set) - 
> TRS (target replica set).) 
> It sounds like we are trying to prevent a member of ORS from being added back 
> to the ISR, but even if it did get added, it would be removed in the next 
> step anyway. In the uncommon case that an ORS replica is out of sync, there 
> does not seem to be any benefit to this first update since it is basically 
> paying the cost of one write in order to save the speculative cost of one 
> write. Additionally, it would be useful if the protocol could enforce the 
> invariant that the ISR is always a subset of the replica set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8382) Add TimestampedSessionStore

2020-01-30 Thread highluck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027069#comment-17027069
 ] 

highluck commented on KAFKA-8382:
-

[https://github.com/apache/kafka/pull/8022]

> Add TimestampedSessionStore
> ---
>
> Key: KAFKA-8382
> URL: https://issues.apache.org/jira/browse/KAFKA-8382
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: highluck
>Priority: Minor
>  Labels: kip
>
> Follow up to KIP-258, to complete the KIP by adding TimestampedSessionStores.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9479) Describe consumer group --all-groups shows header for each entry

2020-01-30 Thread Vetle Leinonen-Roeim (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027007#comment-17027007
 ] 

Vetle Leinonen-Roeim commented on KAFKA-9479:
-

I created a patch for this to see what it was like to get a development 
environment for Kafka up and running, but I see it's already assigned. Let me 
know if you'd like me to submit it as a PR.

> Describe consumer group --all-groups shows header for each entry
> 
>
> Key: KAFKA-9479
> URL: https://issues.apache.org/jira/browse/KAFKA-9479
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jeff Kim
>Priority: Major
>  Labels: newbie
>
> When using `bin/kafka-consumer-groups.sh --describe --state --all-groups`, we 
> print output like the following:
> {code}
> GROUP  COORDINATOR (ID)  
> ASSIGNMENT-STRATEGY  STATE   #MEMBERS
> group1 localhost:9092 (3) rangeStable  1  
>   
>
> GROUP  COORDINATOR (ID)  
> ASSIGNMENT-STRATEGY  STATE   #MEMBERS
> group2  localhost:9092 (3) rangeStable  1 
>   
>  
> {code}
> It would be nice if we did not show the header for every entry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7447) Consumer offsets lost during leadership rebalance after bringing node back from clean shutdown

2020-01-30 Thread Lauren McDonald (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026987#comment-17026987
 ] 

Lauren McDonald commented on KAFKA-7447:


We have also encountered this issue multiple times over the past few months in 
two different (6-node) production clusters. I believe it is triggered by 
"general infrastructure instability" (network, server, etc). We notice similar 
behavior above where it seems like brokers are arguing about who is the 
controller for a while, then 5 min later one of the consumer groups resets. I 
wanted to chime in about the specific reset though. I can tell it happens in 
our logs when a consumer group's generation goes from 1000+ to 0. 

We also do see the error mentioned above about "Error loading offsets from 
"
{code:java}
December 8th 2019, 19:03:32.181[GroupCoordinator 6]: Loading group metadata for 
GROUP12345 with generation 2144

December 8th 2019, 19:03:38.807[GroupCoordinator 3]: Member 
consumer-3-GROUP12345 in group GROUP12345 has failed, removing it from the group

December 8th 2019, 19:03:46.857[GroupCoordinator 3]: Member 
consumer-1-GROUP12345 in group GROUP12345 has failed, removing it from the group

December 8th 2019, 19:06:26.930[GroupCoordinator 6]: Unloading group metadata 
for GROUP12345 with generation 2144

December 8th 2019, 19:06:31.126[GroupMetadataManager brokerId=3] Error loading 
offsets from __consumer_offsets-46

December 8th 2019, 19:06:31.141[GroupCoordinator 3]: Preparing to rebalance 
group GROUP12345 in state PreparingRebalance with old generation 0 
(__consumer_offsets-46) (reason: Adding new member consumer-3-GROUP12345 with 
group instanceid None)

December 8th 2019, 19:06:35.051[GroupCoordinator 6]: Member 
consumer-3-GROUP12345 in group GROUP12345 has failed, removing it from the 
group

December 8th 2019, 19:06:35.525[GroupCoordinator 6]: Member 
consumer-1-GROUP12345 in group GROUP12345 has failed, removing it from the 
group{code}
 

This has affected 4-5 of our consumer groups over the past 60 days. If the 
consumer has its config set to "earliest", it starts at the beginning of the log 
again and reprocesses, which is bad. However, if it has its config set to 
"latest" and the group resets, it potentially missed messages between the last 
committed offset and the point where it retrieved "latest" after the consumer 
group reset, which arguably can be worse. 

We are on the 2.3.x version of Kafka and have auto rebalance set to true. I do 
notice the rebalancing happening during this time (the controller is telling 
brokers to become leader/follower on partitions). One suggestion (aside from 
upgrading, which we'll do as well): if it is detected that partition leadership 
should be rebalanced (the imbalance ratio is higher than the configured 
setting), wait a configurable amount of time before triggering the rebalance. 
That way, if the cluster was already unstable when the rebalance check fired, 
the rebalance would not add more complexity on top, but would instead wait a 
reasonable amount of time for things to settle. We may turn off auto rebalance 
and implement this locally for now.

 

We do not run Kafka on the same hosts as Zookeeper. 

> Consumer offsets lost during leadership rebalance after bringing node back 
> from clean shutdown
> --
>
> Key: KAFKA-7447
> URL: https://issues.apache.org/jira/browse/KAFKA-7447
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.1, 2.0.0
>Reporter: Ben Isaacs
>Priority: Major
>
> *Summary:*
>  * When 1 of my 3 brokers is cleanly shut down, consumption and production 
> continues as normal due to replication. (Consumers are rebalanced to the 
> replicas, and producers are rebalanced to the remaining brokers). However, 
> when the cleanly-shut-down broker comes back, after about 10 minutes, a 
> flurry of production errors occur and my consumers suddenly go back in time 2 
> weeks, causing a long outage (12 hours+) as all messages are replayed on some 
> topics.
>  * The hypothesis is that the auto-leadership-rebalance is happening too 
> quickly after the downed broker returns, before it has had a chance to become 
> fully synchronised on all partitions. In particular, it seems that having 
> consumer offets ahead of the most recent data on the topic that consumer was 
> following causes the consumer to be reset to 0.
> *Expected:*
>  * bringing a node back from a clean shut down does not cause any consumers 
> to reset to 0.
> *Actual:*
>  * I experience approximately 12 hours of partial outage triggered at the 
> point that auto leadership rebalance occurs, after a cleanly shut down node 
> returns.
> *Workaround:*
>  * disable auto leadership rebalance entirely.

[jira] [Commented] (KAFKA-9441) Refactor commit logic

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026935#comment-17026935
 ] 

ASF GitHub Bot commented on KAFKA-9441:
---

mjsax commented on pull request #7977: KAFKA-9441: Refactor Kafka Streams 
commit logic
URL: https://github.com/apache/kafka/pull/7977
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Refactor commit logic
> -
>
> Key: KAFKA-9441
> URL: https://issues.apache.org/jira/browse/KAFKA-9441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> Using a producer per thread in combination with EOS, it's no longer possible 
> to commit individual tasks independently (as done currently).
> We need to refactor StreamThread to commit all tasks at the same time for 
> the new model.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9451) Pass consumer group metadata to producer on commit

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026932#comment-17026932
 ] 

ASF GitHub Bot commented on KAFKA-9451:
---

mjsax commented on pull request #8014: KAFKA-9451: Pass group metadata into 
producer in Kafka Streams
URL: https://github.com/apache/kafka/pull/8014
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Pass consumer group metadata to producer on commit
> --
>
> Key: KAFKA-9451
> URL: https://issues.apache.org/jira/browse/KAFKA-9451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> Using the producer-per-thread EOS design, we need to pass the consumer group 
> metadata into `producer.sendOffsetsToTransaction()` to use the new consumer 
> group coordinator fencing mechanism. We should also reduce the default 
> transaction timeout to 10 seconds (see the KIP for details).
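
For reference, a minimal Scala sketch of the call shape KIP-447 introduces; 
the topic, partition, and offset are placeholders:

{code:scala}
import java.util.Collections

import org.apache.kafka.clients.consumer.{KafkaConsumer, OffsetAndMetadata}
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.common.TopicPartition

// Assumes an already-initialized transactional producer and a subscribed consumer.
def commitInTransaction(producer: KafkaProducer[String, String],
                        consumer: KafkaConsumer[String, String]): Unit = {
  val offsets = Collections.singletonMap(
    new TopicPartition("input-topic", 0), new OffsetAndMetadata(42L)) // placeholders
  producer.beginTransaction()
  // New overload: pass the consumer's group metadata so the broker can fence
  // zombie producers through the group coordinator (KIP-447).
  producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata())
  producer.commitTransaction()
}
{code}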



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-6020) Broker side filtering

2020-01-30 Thread DEAN JAIN (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026910#comment-17026910
 ] 

DEAN JAIN edited comment on KAFKA-6020 at 1/30/20 6:43 PM:
---

It's been almost a year and we are still looking forward to this feature. Any 
updates? Just let us know if there is any plan for this in the near future.

 

I liked this approach; any thoughts on adopting something similar?

[https://github.com/flipkart-incubator/kafka-filtering]

 


was (Author: deanj):
It's been almost a year and we are still looking forward to this feature. Any 
updates? Just let us know if there is any plan for this in the near future.

> Broker side filtering
> -
>
> Key: KAFKA-6020
> URL: https://issues.apache.org/jira/browse/KAFKA-6020
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer
>Reporter: Pavel Micka
>Priority: Major
>  Labels: needs-kip
>
> Currently, it is not possible to filter messages on the broker side. Filtering 
> messages on the broker side is convenient for filters with very low selectivity 
> (one message in a few thousand). In my case it means transferring several GB of 
> data to the consumer, throwing it away, taking one message, and doing it again...
> While I understand that filtering by message body is not feasible (for 
> performance reasons), I propose to filter just by message key prefix. This 
> can be achieved even without any deserialization, as the prefix to be matched 
> can be passed as an array (hence the broker would do just an array prefix 
> compare).
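
For concreteness, the check the proposal describes amounts to a raw byte-array 
prefix comparison on the serialized key, with no deserialization; a minimal 
Scala sketch:

{code:scala}
// Sketch of the proposed broker-side test: does the serialized key start
// with the requested prefix? Pure byte comparison, no deserialization.
def keyHasPrefix(key: Array[Byte], prefix: Array[Byte]): Boolean = {
  if (key == null || key.length < prefix.length) false
  else {
    var i = 0
    while (i < prefix.length && key(i) == prefix(i)) i += 1
    i == prefix.length
  }
}
{code}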



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6020) Broker side filtering

2020-01-30 Thread DEAN JAIN (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026910#comment-17026910
 ] 

DEAN JAIN commented on KAFKA-6020:
--

It's been almost a year and we are still looking forward to this feature. Any 
updates? Just let us know if there is any plan for this in the near future.

> Broker side filtering
> -
>
> Key: KAFKA-6020
> URL: https://issues.apache.org/jira/browse/KAFKA-6020
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer
>Reporter: Pavel Micka
>Priority: Major
>  Labels: needs-kip
>
> Currently, it is not possible to filter messages on the broker side. Filtering 
> messages on the broker side is convenient for filters with very low selectivity 
> (one message in a few thousand). In my case it means transferring several GB of 
> data to the consumer, throwing it away, taking one message, and doing it again...
> While I understand that filtering by message body is not feasible (for 
> performance reasons), I propose to filter just by message key prefix. This 
> can be achieved even without any deserialization, as the prefix to be matched 
> can be passed as an array (hence the broker would do just an array prefix 
> compare).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9483) Add Scala KStream#toTable to the Streams DSL

2020-01-30 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9483:
---
Component/s: streams

> Add Scala KStream#toTable to the Streams DSL
> 
>
> Key: KAFKA-9483
> URL: https://issues.apache.org/jira/browse/KAFKA-9483
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: highluck
>Assignee: highluck
>Priority: Minor
>
> [KIP-523: Add KStream#toTable to the Streams 
> DSL|https://cwiki.apache.org/confluence/display/KAFKA/KIP-523%3A+Add+KStream%23toTable+to+the+Streams+DSL?src=jira]
>  
> I am trying to add the same function to the Scala DSL



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8162) IBM JDK Class not found error when handling SASL authentication exception

2020-01-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8162.
---
Fix Version/s: 2.5.0
   Resolution: Fixed

> IBM JDK Class not found error when handling SASL authentication exception
> -
>
> Key: KAFKA-8162
> URL: https://issues.apache.org/jira/browse/KAFKA-8162
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Any with IBM JDK 8 SR5 FP10
>Reporter: Arkadiusz Firus
>Assignee: Edoardo Comar
>Priority: Major
> Fix For: 2.5.0
>
>
> When there is a problem with SASL authentication, the KerberosError enum is 
> used to retrieve the error code. When the IBM JDK is used, it tries to load 
> the class com.ibm.security.krb5.internal.KrbException, which is not present 
> in all IBM JDK versions. This leads to a NoClassDefFoundError which is not 
> handled.
> I tested it on:
>  java version "1.8.0_161"
>  Java(TM) SE Runtime Environment (build 8.0.5.10 - 
> pxa6480sr5fp10-20180214_01(SR5 FP10))
>  IBM J9 VM (build 2.9, JRE 1.8.0 Linux amd64-64 Compressed References 
> 20180208_378436 (JIT enabled, AOT enabled)
> In this version of the JDK, the class KrbException is in the package 
> com.ibm.security.krb5 (without "internal"). So the fully qualified class 
> name is: com.ibm.security.krb5.KrbException
> Full stack trace from the logs:
> [2019-03-27 06:50:00,113] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.NoClassDefFoundError: 
> org.apache.kafka.common.security.kerberos.KerberosError (initialization 
> failure)
>     at 
> java.lang.J9VMInternals.initializationAlreadyFailed(J9VMInternals.java:96)
>     at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:384)
>     at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:256)
>     at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:132)
>     at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:532)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
>     at kafka.network.Processor.poll(SocketServer.scala:689)
>     at kafka.network.Processor.run(SocketServer.scala:594)
>     at java.lang.Thread.run(Thread.java:811)
> Caused by: org.apache.kafka.common.KafkaException: Kerberos exceptions could 
> not be initialized
>     at 
> org.apache.kafka.common.security.kerberos.KerberosError.<clinit>(KerberosError.java:59)
>     ... 8 more
> Caused by: java.lang.ClassNotFoundException: 
> com.ibm.security.krb5.internal.KrbException
>     at java.lang.Class.forNameImpl(Native Method)
>     at java.lang.Class.forName(Class.java:297)
>     at 
> org.apache.kafka.common.security.kerberos.KerberosError.<clinit>(KerberosError.java:53)
>     ... 8 more
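
The fix amounts to resolving whichever of the two known class names is 
present instead of assuming the internal package. A sketch of that defensive 
lookup (a sketch of the approach, not the actual patch):
{code:java}
class IbmKrbExceptionLoader {
    // Try the KrbException locations known across IBM JDK builds instead
    // of assuming the "internal" package always exists.
    static Class<?> loadIbmKrbException() {
        String[] candidates = {
            "com.ibm.security.krb5.internal.KrbException",
            "com.ibm.security.krb5.KrbException" // location in 8 SR5 FP10 above
        };
        for (String name : candidates) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException e) {
                // fall through to the next known location
            }
        }
        return null; // neither variant present; callers must handle this
    }
}
{code}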



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8162) IBM JDK Class not found error when handling SASL authentication exception

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026882#comment-17026882
 ] 

ASF GitHub Bot commented on KAFKA-8162:
---

mimaison commented on pull request #6524: KAFKA-8162: IBM JDK Class not found 
error when handling SASL
URL: https://github.com/apache/kafka/pull/6524
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> IBM JDK Class not found error when handling SASL authentication exception
> -
>
> Key: KAFKA-8162
> URL: https://issues.apache.org/jira/browse/KAFKA-8162
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Any with IBM JDK 8 SR5 FP10
>Reporter: Arkadiusz Firus
>Assignee: Edoardo Comar
>Priority: Major
>
> When there is a problem with SASL authentication, the KerberosError enum is 
> used to retrieve the error code. When the IBM JDK is used, it tries to load 
> the class com.ibm.security.krb5.internal.KrbException, which is not present 
> in all IBM JDK versions. This leads to a NoClassDefFoundError which is not 
> handled.
> I tested it on:
>  java version "1.8.0_161"
>  Java(TM) SE Runtime Environment (build 8.0.5.10 - 
> pxa6480sr5fp10-20180214_01(SR5 FP10))
>  IBM J9 VM (build 2.9, JRE 1.8.0 Linux amd64-64 Compressed References 
> 20180208_378436 (JIT enabled, AOT enabled)
> In this version of the JDK, the class KrbException is in the package 
> com.ibm.security.krb5 (without "internal"). So the fully qualified class 
> name is: com.ibm.security.krb5.KrbException
> Full stack trace from the logs:
> [2019-03-27 06:50:00,113] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.NoClassDefFoundError: 
> org.apache.kafka.common.security.kerberos.KerberosError (initialization 
> failure)
>     at 
> java.lang.J9VMInternals.initializationAlreadyFailed(J9VMInternals.java:96)
>     at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:384)
>     at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:256)
>     at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:132)
>     at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:532)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
>     at kafka.network.Processor.poll(SocketServer.scala:689)
>     at kafka.network.Processor.run(SocketServer.scala:594)
>     at java.lang.Thread.run(Thread.java:811)
> Caused by: org.apache.kafka.common.KafkaException: Kerberos exceptions could 
> not be initialized
>     at 
> org.apache.kafka.common.security.kerberos.KerberosError.<clinit>(KerberosError.java:59)
>     ... 8 more
> Caused by: java.lang.ClassNotFoundException: 
> com.ibm.security.krb5.internal.KrbException
>     at java.lang.Class.forNameImpl(Native Method)
>     at java.lang.Class.forName(Class.java:297)
>     at 
> org.apache.kafka.common.security.kerberos.KerberosError.<clinit>(KerberosError.java:53)
>     ... 8 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9483) Add Scala KStream#toTable to the Streams DSL

2020-01-30 Thread highluck (Jira)
highluck created KAFKA-9483:
---

 Summary: Add Scala KStream#toTable to the Streams DSL
 Key: KAFKA-9483
 URL: https://issues.apache.org/jira/browse/KAFKA-9483
 Project: Kafka
  Issue Type: Improvement
Reporter: highluck
Assignee: highluck


[KIP-523: Add KStream#toTable to the Streams 
DSL|https://cwiki.apache.org/confluence/display/KAFKA/KIP-523%3A+Add+KStream%23toTable+to+the+Streams+DSL?src=jira]

 

I am trying to add the same function to Scala.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9474) Kafka RPC protocol should support type 'double'

2020-01-30 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-9474.

Resolution: Fixed

> Kafka RPC protocol should support type 'double'
> ---
>
> Key: KAFKA-9474
> URL: https://issues.apache.org/jira/browse/KAFKA-9474
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Brian Byrne
>Assignee: Brian Byrne
>Priority: Minor
> Fix For: 2.5.0
>
>
> Should be fairly straightforward. Useful for KIP-546.
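
In the JSON message definitions that generate the protocol code, the new type 
would be referenced like any other field type. A hypothetical fragment (the 
field name and semantics are made up for illustration):
{code:java}
{ "name": "TokenRate", "type": "float64", "versions": "0+",
  "about": "Hypothetical double-precision field using the new type." }
{code}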



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9474) Kafka RPC protocol should support type 'double'

2020-01-30 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-9474:
---
Fix Version/s: 2.5.0

> Kafka RPC protocol should support type 'double'
> ---
>
> Key: KAFKA-9474
> URL: https://issues.apache.org/jira/browse/KAFKA-9474
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Brian Byrne
>Assignee: Brian Byrne
>Priority: Minor
> Fix For: 2.5.0
>
>
> Should be fairly straightforward. Useful for KIP-546.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9408) Use StandardCharsets UTF-8 instead of UTF-8 Name

2020-01-30 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-9408.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Use StandardCharsets UTF-8 instead of UTF-8 Name
> 
>
> Key: KAFKA-9408
> URL: https://issues.apache.org/jira/browse/KAFKA-9408
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: David Mollitor
>Priority: Minor
> Fix For: 2.5.0
>
>
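
The change is mechanical but worthwhile: the constant avoids a by-name 
charset lookup and a checked exception that can never actually occur for 
UTF-8. A before/after sketch:
{code:java}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

class Utf8Example {
    // Before: name-based lookup forces a checked exception, even though
    // UTF-8 is a charset every JVM is required to support.
    static byte[] before(String s) throws UnsupportedEncodingException {
        return s.getBytes("UTF-8");
    }

    // After: constant-based, no lookup and no checked exception.
    static byte[] after(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }
}
{code}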




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9252) Kafka Connect fails to create connector if single-broker Kafka cluster is configured for offsets.topic.replication.factor=3

2020-01-30 Thread Behrouz (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026713#comment-17026713
 ] 

Behrouz commented on KAFKA-9252:


I have the same problem with *Kafka Connect 5.3.1* and 
*offsets.topic.replication.factor = 1*.

> Kafka Connect fails to create connector if single-broker Kafka cluster is 
> configured for offsets.topic.replication.factor=3
> ---
>
> Key: KAFKA-9252
> URL: https://issues.apache.org/jira/browse/KAFKA-9252
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.3.1
>Reporter: Robin Moffatt
>Priority: Minor
>
> If I mis-configure my *single* Kafka broker with 
> `offsets.topic.replication.factor=3` (the default), Kafka Connect will start 
> up absolutely fine (Kafka Connect started in the log file, `/connectors` 
> endpoint returns HTTP 200). But if I try to create a connector, it 
> (eventually) returns
> {code:java}
> {"error_code":500,"message":"Request timed out"}{code}
> There's no error in the Kafka Connect worker log at INFO level. More details: 
> [https://rmoff.net/2019/11/29/kafka-connect-request-timed-out/]
> This could be improved: either ensure at startup that the Kafka consumer 
> offsets topic is available and fail to start if it is not, or at least log 
> why the connector failed to be created.
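
For a single-broker cluster, the broker-side replication settings have to 
match the number of available brokers. A sketch of the relevant 
server.properties lines (values shown for a one-broker setup; adjust to your 
cluster):
{code}
# server.properties on the single broker
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
{code}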



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9013) Flaky Test MirrorConnectorsIntegrationTest#testReplication

2020-01-30 Thread Ryanne Dolan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026700#comment-17026700
 ] 

Ryanne Dolan commented on KAFKA-9013:
-

Sounds reasonable. I'll take a look as well.

> Flaky Test MirrorConnectorsIntegrationTest#testReplication
> --
>
> Key: KAFKA-9013
> URL: https://issues.apache.org/jira/browse/KAFKA-9013
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Bruno Cadonna
>Priority: Major
>  Labels: flaky-test
>
> h1. Stacktrace:
> {code:java}
> java.lang.AssertionError: Condition not met within timeout 2. Offsets not 
> translated downstream to primary cluster.
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:354)
>   at 
> org.apache.kafka.connect.mirror.MirrorConnectorsIntegrationTest.testReplication(MirrorConnectorsIntegrationTest.java:239)
> {code}
> h1. Standard Error
> {code}
> Standard Error
> Oct 09, 2019 11:32:00 PM org.glassfish.jersey.internal.inject.Providers 
> checkProviderRuntime
> WARNING: A provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource 
> registered in SERVER runtime does not implement any provider interfaces 
> applicable in the SERVER runtime. Due to constraint configuration problems 
> the provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will 
> be ignored. 
> Oct 09, 2019 11:32:00 PM org.glassfish.jersey.internal.inject.Providers 
> checkProviderRuntime
> WARNING: A provider 
> org.apache.kafka.connect.runtime.rest.resources.LoggingResource registered in 
> SERVER runtime does not implement any provider interfaces applicable in the 
> SERVER runtime. Due to constraint configuration problems the provider 
> org.apache.kafka.connect.runtime.rest.resources.LoggingResource will be 
> ignored. 
> Oct 09, 2019 11:32:00 PM org.glassfish.jersey.internal.inject.Providers 
> checkProviderRuntime
> WARNING: A provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered 
> in SERVER runtime does not implement any provider interfaces applicable in 
> the SERVER runtime. Due to constraint configuration problems the provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be 
> ignored. 
> Oct 09, 2019 11:32:00 PM org.glassfish.jersey.internal.inject.Providers 
> checkProviderRuntime
> WARNING: A provider 
> org.apache.kafka.connect.runtime.rest.resources.RootResource registered in 
> SERVER runtime does not implement any provider interfaces applicable in the 
> SERVER runtime. Due to constraint configuration problems the provider 
> org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored. 
> Oct 09, 2019 11:32:01 PM org.glassfish.jersey.internal.Errors logErrors
> WARNING: The following warnings have been detected: WARNING: The 
> (sub)resource method listLoggers in 
> org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains 
> empty path annotation.
> WARNING: The (sub)resource method listConnectors in 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains 
> empty path annotation.
> WARNING: The (sub)resource method createConnector in 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains 
> empty path annotation.
> WARNING: The (sub)resource method listConnectorPlugins in 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource 
> contains empty path annotation.
> WARNING: The (sub)resource method serverInfo in 
> org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty 
> path annotation.
> Oct 09, 2019 11:32:02 PM org.glassfish.jersey.internal.inject.Providers 
> checkProviderRuntime
> WARNING: A provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource 
> registered in SERVER runtime does not implement any provider interfaces 
> applicable in the SERVER runtime. Due to constraint configuration problems 
> the provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will 
> be ignored. 
> Oct 09, 2019 11:32:02 PM org.glassfish.jersey.internal.inject.Providers 
> checkProviderRuntime
> WARNING: A provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered 
> in SERVER runtime does not implement any provider interfaces applicable in 
> the SERVER runtime. Due to constraint configuration problems the provider 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be 
> ignored. 
> Oct 09, 2019 11:32:02 PM org.glassfish.jersey.internal.inject.Providers 
> checkProviderRuntime
> WARNING: A provider 
> org.apache.kafka.connect.runtime

[jira] [Commented] (KAFKA-9408) Use StandardCharsets UTF-8 instead of UTF-8 Name

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026663#comment-17026663
 ] 

ASF GitHub Bot commented on KAFKA-9408:
---

ijuma commented on pull request #7940: KAFKA-9408: Use StandardCharsets UTF-8 
instead of UTF-8 Name
URL: https://github.com/apache/kafka/pull/7940
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use StandardCharsets UTF-8 instead of UTF-8 Name
> 
>
> Key: KAFKA-9408
> URL: https://issues.apache.org/jira/browse/KAFKA-9408
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: David Mollitor
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9474) Kafka RPC protocol should support type 'double'

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026659#comment-17026659
 ] 

ASF GitHub Bot commented on KAFKA-9474:
---

ijuma commented on pull request #8012: KAFKA-9474: Adds 'float64' to the RPC 
protocol types.
URL: https://github.com/apache/kafka/pull/8012
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka RPC protocol should support type 'double'
> ---
>
> Key: KAFKA-9474
> URL: https://issues.apache.org/jira/browse/KAFKA-9474
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Brian Byrne
>Assignee: Brian Byrne
>Priority: Minor
>
> Should be fairly straightforward. Useful for KIP-546.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9458) Kafka crashed in windows environment

2020-01-30 Thread hirik (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hirik updated KAFKA-9458:
-
Labels: windows  (was: )

> Kafka crashed in windows environment
> 
>
> Key: KAFKA-9458
> URL: https://issues.apache.org/jira/browse/KAFKA-9458
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.4.0
> Environment: Windows Server 2019
>Reporter: hirik
>Priority: Critical
>  Labels: windows
> Fix For: 2.5.0
>
> Attachments: logs.zip
>
>
> Hi,
> While I was trying to validate the Kafka retention policy, the Kafka server 
> crashed with the exception trace below.
> [2020-01-21 17:10:40,475] INFO [Log partition=test1-3, 
> dir=C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka] 
> Rolled new log segment at offset 1 in 52 ms. (kafka.log.Log)
> [2020-01-21 17:10:40,484] ERROR Error while deleting segments for test1-3 in 
> dir C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka 
> (kafka.server.LogDirFailureChannel)
> java.nio.file.FileSystemException: 
> C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka\test1-3\.timeindex
>  -> 
> C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka\test1-3\.timeindex.deleted:
>  The process cannot access the file because it is being used by another 
> process.
> at 
> java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
>  at 
> java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
>  at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:395)
>  at 
> java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
>  at java.base/java.nio.file.Files.move(Files.java:1425)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:795)
>  at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:209)
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:497)
>  at kafka.log.Log.$anonfun$deleteSegmentFiles$1(Log.scala:2206)
>  at kafka.log.Log.$anonfun$deleteSegmentFiles$1$adapted(Log.scala:2206)
>  at scala.collection.immutable.List.foreach(List.scala:305)
>  at kafka.log.Log.deleteSegmentFiles(Log.scala:2206)
>  at kafka.log.Log.removeAndDeleteSegments(Log.scala:2191)
>  at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1700)
>  at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.scala:17)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:2316)
>  at kafka.log.Log.deleteSegments(Log.scala:1691)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1686)
>  at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1763)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1753)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:982)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:979)
>  at scala.collection.immutable.List.foreach(List.scala:305)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:979)
>  at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:403)
>  at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:116)
>  at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:65)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>  at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:830)
>  Suppressed: java.nio.file.FileSystemException: 
> C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka\test1-3\.timeindex
>  -> 
> C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka\test1-3\.timeindex.deleted:
>  The process cannot access the file because it is being used by another 
> process.
> at 
> java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
>  at 
> java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
>  at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:309)
>  at 
> java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
>  at java.base/java.nio.file.Files.move(Files.java:1425)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:792)
>  ... 27 more
> [2020-01-21 17:10:40,495] INFO [ReplicaManager broker=0] Stopping serving 
> replicas in dir 
> C:\Users\Administrator\Down
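
(The log above is truncated in the archive.) The failure pattern matches the 
long-standing Windows limitation that a file cannot be renamed while a JVM 
still holds a memory mapping on it, which is what the broker does with index 
files. A minimal repro sketch under that assumption (file names are 
illustrative):
{code:java}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class MappedRenameRepro {
    public static void main(String[] args) throws Exception {
        Path src = Paths.get("segment.timeindex");
        Path dst = Paths.get("segment.timeindex.deleted");
        try (RandomAccessFile raf = new RandomAccessFile(src.toFile(), "rw")) {
            raf.setLength(1024);
            MappedByteBuffer map = raf.getChannel()
                .map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            map.put(0, (byte) 1); // keep the mapping live
            // On Windows this move typically fails with a FileSystemException
            // ("The process cannot access the file because it is being used
            // by another process") while the mapping is still reachable;
            // on Linux it succeeds.
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        }
    }
}
{code}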

[jira] [Resolved] (KAFKA-9360) emitting checkpoint and heartbeat set to false will not disable the activity in their SourceTask

2020-01-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9360.
---
Resolution: Fixed

> emitting checkpoint and heartbeat set to false will not disable the activity 
> in their SourceTask
> 
>
> Key: KAFKA-9360
> URL: https://issues.apache.org/jira/browse/KAFKA-9360
> Project: Kafka
>  Issue Type: Improvement
>  Components: mirrormaker
>Affects Versions: 2.4.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
>Priority: Major
> Fix For: 2.5.0
>
> Attachments: Screen Shot 2020-01-02 at 2.55.38 PM.png, Screen Shot 
> 2020-01-02 at 3.18.23 PM.png
>
>
> `emit.heartbeats.enabled` and `emit.checkpoints.enabled` are supposed to be 
> the knobs that control whether heartbeat or checkpoint messages are sent to 
> their topics. In our experiments, setting them to false did not suspend the 
> activity in the corresponding SourceTasks (MirrorHeartbeatTask, 
> MirrorCheckpointTask).
> The observation is that, when these knobs are set to false, a huge volume of 
> `SourceRecord`s is sent without any interval, causing significantly higher 
> CPU usage and GC time in the MirrorMaker 2 instance and congesting the 
> single partition of the heartbeat topic and checkpoint topic.
> The proposed fix in the following PR is to (1) explicitly check whether 
> `interval` is set to a negative value (e.g. -1), which happens when 
> `emit.heartbeats.enabled` or `emit.checkpoints.enabled` is off, and (2) if 
> `interval` is indeed negative, put the thread to sleep for a while (e.g. 5 
> seconds) and return null, in order to prevent it from hogging the CPU and 
> from sending heartbeat or checkpoint messages to Kafka topics.
> PR link: https://github.com/apache/kafka/pull/7887
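
A simplified sketch of that guard at the top of the task's poll(), assuming 
`interval` is the task's Duration field; `emitHeartbeat` is a hypothetical 
helper standing in for the normal emission path:
{code:java}
public List<SourceRecord> poll() throws InterruptedException {
    if (interval.isNegative()) {
        // Emission is disabled (emit.heartbeats.enabled=false): back off
        // briefly instead of busy-looping, and emit nothing.
        Thread.sleep(5000);
        return null;
    }
    return emitHeartbeat(); // normal emission path (hypothetical helper)
}
{code}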



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9360) emitting checkpoint and heartbeat set to false will not disable the activity in their SourceTask

2020-01-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026575#comment-17026575
 ] 

ASF GitHub Bot commented on KAFKA-9360:
---

mimaison commented on pull request #7887: KAFKA-9360: Fix heartbeat and 
checkpoint emission can not turn off
URL: https://github.com/apache/kafka/pull/7887
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> emitting checkpoint and heartbeat set to false will not disable the activity 
> in their SourceTask
> 
>
> Key: KAFKA-9360
> URL: https://issues.apache.org/jira/browse/KAFKA-9360
> Project: Kafka
>  Issue Type: Improvement
>  Components: mirrormaker
>Affects Versions: 2.4.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
>Priority: Major
> Fix For: 2.5.0
>
> Attachments: Screen Shot 2020-01-02 at 2.55.38 PM.png, Screen Shot 
> 2020-01-02 at 3.18.23 PM.png
>
>
> `emit.heartbeats.enabled` and `emit.checkpoints.enabled` are supposed to be 
> the knobs that control whether heartbeat or checkpoint messages are sent to 
> their topics. In our experiments, setting them to false did not suspend the 
> activity in the corresponding SourceTasks (MirrorHeartbeatTask, 
> MirrorCheckpointTask).
> The observation is that, when these knobs are set to false, a huge volume of 
> `SourceRecord`s is sent without any interval, causing significantly higher 
> CPU usage and GC time in the MirrorMaker 2 instance and congesting the 
> single partition of the heartbeat topic and checkpoint topic.
> The proposed fix in the following PR is to (1) explicitly check whether 
> `interval` is set to a negative value (e.g. -1), which happens when 
> `emit.heartbeats.enabled` or `emit.checkpoints.enabled` is off, and (2) if 
> `interval` is indeed negative, put the thread to sleep for a while (e.g. 5 
> seconds) and return null, in order to prevent it from hogging the CPU and 
> from sending heartbeat or checkpoint messages to Kafka topics.
> PR link: https://github.com/apache/kafka/pull/7887



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9436) New Kafka Connect SMT for plainText => Struct(or Map)

2020-01-30 Thread whsoul (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

whsoul updated KAFKA-9436:
--
Description: 
I'd like to parse and convert plain text rows to Struct (or Map) data, and 
load them into a document database such as MongoDB or Elasticsearch with an SMT.

 

For example

 

1. String parsing (with TIMEMILLIS)
{code:java}
{
   "code" : "dev_kafka_pc001_1580372261372"
   ,"recode1" : "a"
   ,"recode2" : "b" 
}{code}
{code:java}
"transforms": "RegexTransform",
"transforms.RegexTransform.type": 
"org.apache.kafka.connect.transforms.ToStructByRegexTransform$Value",

"transforms.RegexTransform.struct.field": "message",
"transforms.RegexTransform.regex": 
"^(.{3,4})_(.*)_(pc|mw|ios|and)([0-9]{3})_([0-9]{13})" 
"transforms.RegexTransform.mapping": 
"env,serviceId,device,sequence,datetime:TIMEMILLIS"{code}
 

 

2. plain text apache log
{code:java}
"111.61.73.113 - - [08/Aug/2019:18:15:29 +0900] \"OPTIONS 
/api/v1/service_config HTTP/1.1\" 200 - 101989 \"http://local.test.com/\" 
\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, 
like Gecko) Chrome/75.0.3770.142 Safari/537.36\""
{code}
The SMT connect config with the regular expression below can easily transform 
plain text into Struct (or Map) data.

 
{code:java}
"transforms": "RegexTransform",
"transforms.RegexTransform.type": 
"org.apache.kafka.connect.transforms.ToStructByRegexTransform$Value",

"transforms.RegexTransform.struct.field": "message",
"transforms.RegexTransform.regex": "^([\\d.]+) (\\S+) (\\S+) 
\\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(GET|POST|OPTIONS|HEAD|PUT|DELETE|PATCH) 
(.+?) (.+?)\" (\\d{3}) ([0-9|-]+) ([0-9|-]+) \"([^\"]+)\" \"([^\"]+)\""

"transforms.RegexTransform.mapping": 
"IP,RemoteUser,AuthedRemoteUser,DateTime,Method,Request,Protocol,Response,BytesSent,Ms:NUMBER,Referrer,UserAgent"
{code}
 

I have a PR for this; a sketch of the idea follows below.
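
A sketch of the value path such a transform could take, matching the configs 
above (this class is not a shipped Kafka SMT; the names and the NUMBER 
handling are assumptions based on the mapping syntax shown):
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class RegexToMap {
    // Apply the configured regex to the source field and emit one map
    // entry per mapping name; "name:NUMBER" values are parsed as longs.
    static Map<String, Object> transform(Pattern regex, String[] mapping, String text) {
        Matcher m = regex.matcher(text);
        if (!m.matches())
            return null; // no match: the caller would pass the record through
        Map<String, Object> out = new LinkedHashMap<>();
        for (int i = 0; i < mapping.length; i++) {
            String[] nameAndType = mapping[i].split(":", 2); // e.g. "Ms:NUMBER"
            String value = m.group(i + 1);
            out.put(nameAndType[0],
                nameAndType.length == 2 && "NUMBER".equals(nameAndType[1])
                    ? (Object) Long.parseLong(value) : value);
        }
        return out;
    }
}
{code}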

  was:
I'd like to parse and convert plain text rows to struct(or map) data, and load 
into documented database such as mongoDB, elasticSearch, etc... with SMT

 

For example

 

plain text apache log
{code:java}
"111.61.73.113 - - [08/Aug/2019:18:15:29 +0900] \"OPTIONS 
/api/v1/service_config HTTP/1.1\" 200 - 101989 \"http://local.test.com/\"; 
\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, 
like Gecko) Chrome/75.0.3770.142 Safari/537.36\""
{code}
SMT connect config with regular expression below can easily transform a plain 
text to struct (or map) data.

 
{code:java}
"transforms": "TimestampTopic, RegexTransform",
"transforms.RegexTransform.type": 
"org.apache.kafka.connect.transforms.ToStructByRegexTransform$Value",

"transforms.RegexTransform.struct.field": "message",
"transforms.RegexTransform.regex": "^([\\d.]+) (\\S+) (\\S+) 
\\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(GET|POST|OPTIONS|HEAD|PUT|DELETE|PATCH) 
(.+?) (.+?)\" (\\d{3}) ([0-9|-]+) ([0-9|-]+) \"([^\"]+)\" \"([^\"]+)\""

"transforms.RegexTransform.mapping": 
"IP,RemoteUser,AuthedRemoteUser,DateTime,Method,Request,Protocol,Response,BytesSent,Ms:NUMBER,Referrer,UserAgent"
{code}
 

I have PR about this


> New Kafka Connect SMT for plainText => Struct(or Map)
> -
>
> Key: KAFKA-9436
> URL: https://issues.apache.org/jira/browse/KAFKA-9436
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: whsoul
>Priority: Major
>
> I'd like to parse and convert plain text rows to Struct (or Map) data, and 
> load them into a document database such as MongoDB or Elasticsearch with an SMT.
>  
> For example
>  
> 1. String parsing (with TIMEMILLIS)
> {code:java}
> {
>"code" : "dev_kafka_pc001_1580372261372"
>,"recode1" : "a"
>,"recode2" : "b" 
> }{code}
> {code:java}
> "transforms": "RegexTransform",
> "transforms.RegexTransform.type": 
> "org.apache.kafka.connect.transforms.ToStructByRegexTransform$Value",
> "transforms.RegexTransform.struct.field": "message",
> "transforms.RegexTransform.regex": 
> "^(.{3,4})_(.*)_(pc|mw|ios|and)([0-9]{3})_([0-9]{13})" 
> "transforms.RegexTransform.mapping": 
> "env,serviceId,device,sequence,datetime:TIMEMILLIS"{code}
>  
>  
> 2. plain text apache log
> {code:java}
> "111.61.73.113 - - [08/Aug/2019:18:15:29 +0900] \"OPTIONS 
> /api/v1/service_config HTTP/1.1\" 200 - 101989 \"http://local.test.com/\" 
> \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, 
> like Gecko) Chrome/75.0.3770.142 Safari/537.36\""
> {code}
> The SMT connect config with the regular expression below can easily transform 
> plain text into Struct (or Map) data.
>  
> {code:java}
> "transforms": "RegexTransform",
> "transforms.RegexTransform.type": 
> "org.apache.kafka.connect.transforms.ToStructByRegexTransform$Value",
> "transforms.RegexTransform.struct.field": "message",
> "transforms.RegexTransform.regex": "^([\\d.]+) (\\S+) (\\S+) 
> \\[([\\w