[GitHub] [kafka] lkokhreidze commented on pull request #11923: KAFKA-6718 / Documentation

2022-03-21 Thread GitBox


lkokhreidze commented on pull request #11923:
URL: https://github.com/apache/kafka/pull/11923#issuecomment-1074801863


   Thanks @showuon 
   I've addressed your comments.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] lkokhreidze commented on a change in pull request #11923: KAFKA-6718 / Documentation

2022-03-21 Thread GitBox


lkokhreidze commented on a change in pull request #11923:
URL: https://github.com/apache/kafka/pull/11923#discussion_r831823225



##
File path: docs/streams/architecture.html
##
@@ -161,6 +161,12 @@ 

[GitHub] [kafka] LiamClarkeNZ commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


LiamClarkeNZ commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074795068


   Cheers Luke, will do :)
   
   On Tue, 22 Mar 2022 at 19:43, Luke Chen ***@***.***> wrote:
   
   > Thanks @LiamClarkeNZ  !
   >
   > The wording overall LGTM! Please submit a PR when available. We can
   > comment on the wording there.
   >
   > Actually, do we need to add similar documentation to the upgrade
   > documentation for 3.0.1, 3.1.1 also?
   >
   > Yes, please add them to 3.0.1 and 3.1.1, too.
   > Thanks.
   >
   > —
   > Reply to this email directly, view it on GitHub
   > , or
   > unsubscribe
   > 

   > .
   > You are receiving this because you were mentioned.Message ID:
   > ***@***.***>
   >
   






[GitHub] [kafka] showuon commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


showuon commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074794074


   Thanks @LiamClarkeNZ !
   
   The wording overall LGTM! Please submit a PR when available. We can comment 
on the wording there.
   
   > Actually, do we need to add similar documentation to the upgrade
   documentation for 3.0.1, 3.1.1 also?
   
   Yes, please add them to 3.0.1 and 3.1.1, too. 
   Thanks.






[GitHub] [kafka] LiamClarkeNZ commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


LiamClarkeNZ commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074783688


   Hi Luke, Ismael et al,
   
   No worries on adding a comment to that effect in the upgrade docs - but I'm
   not sure how to word it, as the idempotent producer was never explicitly
   disabled by default in the Kafka Connect Worker. I realise, based on Luke's
   recent work to fix KAFKA-13598, that the intended enabling of idempotence
   by default with Kafka 3.0 didn't happen either.
   
   So while I just removed the setting of max inflight requests to 1, allowing
   the default of 5 to be used (I also removed the explicit setting of
   acks=all for the same reason), I guess this will be the first version where
   Kafka Connect uses idempotent producers by default. So I was thinking of
   wording it like this:
   
   
   *Kafka Connect now uses [idempotent producers](http://link.to.docs.here
   ) by default, and now defaults to a maximum of 5
   inflight requests (five is the uppermost limit supported by idempotent
   producers). You can override producer settings controlling this behaviour
   using the properties producer.enable.idempotence and
   producer.max.inflight.requests*
   Actually, do we need to add similar documentation to the upgrade
   documentation for 3.0.1 and 3.1.1 also? KC workers based on those versions
   will now default to having idempotence enabled. The workers in those versions
   will still have max.inflight.requests set to 1, but it could be set to 5 by
   producer.* overrides; I could add commentary to that regard.
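   For what it's worth, the overrides mentioned above could be illustrated with a worker config fragment like the following. This is only a sketch: the `producer.` prefix is the standard Connect worker override mechanism, and the values shown are illustrative, not recommendations.

```properties
# Connect worker config sketch: pin the producer behaviour that the new
# defaults imply (idempotence on, up to 5 in-flight requests)...
producer.enable.idempotence=true
producer.max.in.flight.requests.per.connection=5

# ...or opt back into the old behaviour instead:
# producer.enable.idempotence=false
# producer.max.in.flight.requests.per.connection=1
```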
   
   Thanks,
   
   Liam
   
   On Tue, 22 Mar 2022 at 19:07, Luke Chen ***@***.***> wrote:
   
   > @kkonstantine  , thanks for the comment.
   > I agree we add comments in the code to say the idempotent producer is
   > enabled by default.
   > cc @LiamClarkeNZ 
   >
   > Thank you.
   >
   






[jira] [Commented] (KAFKA-13689) AbstractConfig log print information is incorrect

2022-03-21 Thread RivenSun (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17510244#comment-17510244
 ] 

RivenSun commented on KAFKA-13689:
--

Hi [~guozhang] 
1. We all agree that users can get all their custom configurations in their own 
plugin modules by passing the `configs` variable through the 
`ChannelBuilders#channelBuilderConfigs` method.
In fact, the initialization methods of all Kafka clients (such as KafkaConsumer) 
that take AbstractConfig parameters {*}are not public{*}, so users cannot 
directly access the corresponding type of AbstractConfig, and thus will not 
directly call the methods in AbstractConfig.
2. The `configs` variable mentioned above is actually of type 
AbstractConfig.RecordingMap. If users call configs.get(key) to retrieve their 
own configuration, the RecordingMap.get() method will eventually call the 
*AbstractConfig.ignore(key)* method, so `used` will contain user-defined 
unknown configuration.
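The effect described in point 2 can be sketched with a minimal stand-in for AbstractConfig.RecordingMap (the real class is a private inner class of AbstractConfig; the names and simplified behaviour here are illustrative only):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;

public class RecordingMapDemo {

    // Minimal stand-in: a map that records every key it is asked for,
    // mimicking how AbstractConfig.RecordingMap#get ends up marking keys used.
    static class RecordingMap<K, V> extends HashMap<K, V> {
        final Set<Object> used = new HashSet<>();

        @Override
        public V get(Object key) {
            used.add(key);  // even unknown custom keys end up in `used`
            return super.get(key);
        }
    }

    public static void main(String[] args) {
        RecordingMap<String, Object> configs = new RecordingMap<>();
        configs.put("bootstrap.servers", "localhost:9092");
        configs.put("my.custom.plugin.setting", "x");  // unknown to Kafka itself

        configs.get("my.custom.plugin.setting");  // a plugin reads its own config

        // The unknown key is now marked "used", so logUnused() would skip it.
        System.out.println(configs.used.contains("my.custom.plugin.setting"));
    }
}
```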

> AbstractConfig log print information is incorrect
> -
>
> Key: KAFKA-13689
> URL: https://issues.apache.org/jira/browse/KAFKA-13689
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 3.0.0
>Reporter: RivenSun
>Assignee: RivenSun
>Priority: Major
> Fix For: 3.2.0
>
>
> h1. 1.Example
> KafkaClient version is 3.1.0, KafkaProducer init properties:
>  
> {code:java}
> Properties props = new Properties();
> props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, false);
> props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 60003);{code}
>  
>  
> Partial log of KafkaProducer initialization:
> {code:java}
>     ssl.truststore.location = C:\Personal 
> File\documents\KafkaSSL\client.truststore.jks
>     ssl.truststore.password = [hidden]
>     ssl.truststore.type = JKS
>     transaction.timeout.ms = 60003
>     transactional.id = null
>     value.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer[main] INFO 
> org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully 
> logged in.
> [main] WARN org.apache.kafka.clients.producer.ProducerConfig - The 
> configuration 'transaction.timeout.ms' was supplied but isn't a known config.
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.1.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 37edeed0777bacb3
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1645602332999 {code}
> From the above log, you can see that KafkaProducer has applied the user's 
> configuration, {*}transaction.timeout.ms=60003{*}; the default value of this 
> configuration is 60000.
> But we can see another line of log:
> [main] WARN org.apache.kafka.clients.producer.ProducerConfig - The 
> configuration *'transaction.timeout.ms'* was supplied but isn't a 
> *{color:#ff0000}known{color}* config.
>  
> h1. 2.RootCause:
> 1) ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG is set to {*}false{*}.
> So the configurations related to the KafkaProducer transaction will not be 
> requested.
> See the source code: KafkaProducer#configureTransactionState(...) .
> 2) AbstractConfig#logUnused() -> AbstractConfig#unused()
> {code:java}
> public Set<String> unused() {
>     Set<String> keys = new HashSet<>(originals.keySet());
>     keys.removeAll(used);
>     return keys;
> } {code}
> If a configuration has not been requested, the configuration will not be put 
> into the used variable. SourceCode see as below:
> AbstractConfig#get(String key)
>  
> {code:java}
> protected Object get(String key) {
>     if (!values.containsKey(key))
>         throw new ConfigException(String.format("Unknown configuration '%s'", key));
>     used.add(key);
>     return values.get(key);
> } {code}
> h1. Solution:
> 1. AbstractConfig#logUnused() method
> Modify the log message of this method, and change the unused-configuration 
> log level to {*}INFO{*}; what do you think?
> {code:java}
> /**
>  * Log info for any unused configurations
>  */
> public void logUnused() {
>     for (String key : unused())
>         log.info("The configuration '{}' was supplied but isn't a used config.", key);
> }{code}
>  
>  
> 2. AbstractConfig provides two new methods: logUnknown() and unknown()
> {code:java}
> /**
>  * Log warnings for any unknown configurations
>  */
> public void logUnknown() {
>     for (String key : unknown())
>         log.warn("The configuration '{}' was supplied but isn't a known config.", key);
> } {code}
>  
> {code:java}
> public Set<String> unknown() {
>     Set<String> keys = new HashSet<>(originals.keySet());
>     keys.removeAll(values.keySet());
>     return keys;
> } {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [kafka] showuon commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


showuon commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074774576


   @kkonstantine , thanks for the comment. I agree we should add comments in the code 
saying the idempotent producer is enabled by default.
   cc @LiamClarkeNZ 
   
   Thank you.






[GitHub] [kafka] kkonstantine edited a comment on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


kkonstantine edited a comment on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074761446


   Thanks for bringing this to our attention @ijuma. This change should be fine 
from Connect's perspective. `acks=all` is now implied instead of explicitly 
defined, idempotence has been enabled since 3.0.0, and we had decided not to 
override this setting in Connect. 
   
   The property that would give someone pause is the value for 
`max.in.flight.requests.per.connection` that now is greater than 1. The 
reassurance that this will work comes from 
[KAFKA-5494](https://issues.apache.org/jira/browse/KAFKA-5494) which 
unfortunately is not mentioned in the original 
[KIP-98](https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging)
 (I see it mentioned in the 
[KIP-318](https://cwiki.apache.org/confluence/display/KAFKA/KIP-318%3A+Make+Kafka+Connect+Source+idempotent)
 that is relevant to Connect) and should have been mentioned in this PR, if not 
in the code. 
   
   Having said that, I think it would have been a good idea to leave a comment 
in the code that would say that Connect requires these configs which are now 
enabled by default with the idempotent producer. 
   I also agree that this should go as a notable change in the docs. 






[GitHub] [kafka] kkonstantine commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


kkonstantine commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074761446


   Thanks for bringing this to our attention @ijuma. This change should be fine 
from Connect's perspective. `acks=all` is now implied instead of explicitly 
defined, idempotence has been enabled since 3.0.0, and we had decided not to 
override this setting in Connect. 
   
   The property that would give someone pause is the value for 
`max.in.flight.requests.per.connection` that now is greater than 1. The 
reassurance that this will work comes from 
[KAFKA-5494](https://issues.apache.org/jira/browse/KAFKA-5494) which 
unfortunately is not mentioned in the original 
[KIP-98](https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging)
 (I see it mentioned in the 
[KIP-318](https://cwiki.apache.org/confluence/display/KAFKA/KIP-318%3A+Make+Kafka+Connect+Source+idempotent)
 that is relevant to Connect). 
   
   Having said that, I think it would have been a good idea to leave a comment 
in the code that would say that Connect requires these configs which are now 
enabled by default with the idempotent producer. 
   I also agree that this should go as a notable change in the docs. 






[GitHub] [kafka] vvcephei opened a new pull request #11926: KAFKA-13714: Fix cache flush position

2022-03-21 Thread GitBox


vvcephei opened a new pull request #11926:
URL: https://github.com/apache/kafka/pull/11926


   The caching store layers were passing down writes into lower store layers 
upon eviction, but not setting the context to the evicted records' context. 
Instead, the context was from whatever unrelated record was being processed at 
the time.
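   The idea of the fix described above can be sketched as follows. This is an illustrative stand-in, not the actual Streams code: the type and method names are hypothetical, and the real change manipulates the record context inside the caching store layers.

```java
import java.util.Objects;

public class CacheFlushSketch {

    // Stand-in for the record context Streams tracks per record.
    record RecordContext(String topic, long offset, long timestamp) {}

    // Stand-in for the mutable "current record" state of the processor context.
    static RecordContext current;

    // Before forwarding an evicted cache entry downstream, swap in the
    // *evicted record's* context; afterwards restore whatever was processing.
    static void flushEvicted(RecordContext evictedContext, Runnable forwardDownstream) {
        final RecordContext previous = current;
        current = Objects.requireNonNull(evictedContext);
        try {
            forwardDownstream.run();
        } finally {
            current = previous;
        }
    }

    public static void main(String[] args) {
        current = new RecordContext("unrelated-topic", 42L, 2L);
        RecordContext evicted = new RecordContext("input", 7L, 1L);
        // Downstream sees the evicted record's context, not the unrelated one.
        flushEvicted(evicted, () -> System.out.println(current.topic()));
        System.out.println(current.topic());
    }
}
```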
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[jira] [Commented] (KAFKA-13689) AbstractConfig log print information is incorrect

2022-03-21 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17510209#comment-17510209
 ] 

Guozhang Wang commented on KAFKA-13689:
---

Hello [~RivenSun] I agree with you, except for some pondering about this statement:

4. `used` will also contain unknownConfigKeys after users retrieve their own 
custom configuration, so `used` will not necessarily be a subset of `values`.

Since `used` is only updated by either `get` or `ignore`, if we assume `ignore` 
is not called for the moment, then `get` calls can only retrieve 
known configs, and unknown configs can only be retrieved from the 
original map itself. So `used` should not contain any unknown configs at all.

With that said, I think we are still in agreement on the two improvement points 
you mentioned in the last message, since the set returned by `unused()` should still 
just have those not-used configs; I just think those unknown ones would 
not be included.
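The set relationships being discussed can be sketched with plain set arithmetic (the config names and set contents below are illustrative, not taken from a real client):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the sets inside AbstractConfig:
//   originals = everything the user supplied,
//   values    = known configs after parsing,
//   used      = keys retrieved via get() (assuming ignore() is never called).
// Under that assumption, used ⊆ values, so `used` holds no unknown keys,
// yet unused() = originals \ used still reports the unknown ones.
public class ConfigSetsDemo {
    public static void main(String[] args) {
        Set<String> originals = new HashSet<>(Set.of("acks", "linger.ms", "my.custom"));
        Set<String> values = new HashSet<>(Set.of("acks", "linger.ms"));  // known configs
        Set<String> used = new HashSet<>(Set.of("acks"));                 // via get() only

        Set<String> unused = new HashSet<>(originals);
        unused.removeAll(used);                  // {linger.ms, my.custom}

        Set<String> unknown = new HashSet<>(originals);
        unknown.removeAll(values);               // {my.custom}

        // The unknown key shows up in unused() even though it was never "used":
        System.out.println(unused.contains("my.custom") && !used.contains("my.custom"));
    }
}
```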






[jira] [Updated] (KAFKA-13152) Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2022-03-21 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao updated KAFKA-13152:
--
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Replace "buffered.records.per.partition" with "input.buffer.max.bytes" 
> ---
>
> Key: KAFKA-13152
> URL: https://issues.apache.org/jira/browse/KAFKA-13152
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Sagar Rao
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.3.0
>
>
> The current config "buffered.records.per.partition" controls how many records 
> at maximum to bookkeep, and hence when it is exceeded we would pause fetching from 
> this partition. However this config has two issues:
> * It's a per-partition config, so the total memory consumed is dependent on 
> the dynamic number of partitions assigned.
> * Record size could vary from case to case.
> And hence it's hard to bound the memory usage for this buffering. We should 
> consider deprecating that config in favor of a global one, e.g. "input.buffer.max.bytes", 
> which controls how many bytes in total are allowed to be buffered. This is 
> doable since we buffer the raw records in .
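A bytes-based bound of the kind proposed could be sketched like this (all names are illustrative; this is not the actual Streams implementation or its config handling):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of a global bytes-based input buffer bound:
// fetching is paused once the total buffered bytes exceed the configured
// maximum, instead of counting records per partition.
public class InputBufferSketch {
    private final long maxBytes;   // stand-in for "input.buffer.max.bytes"
    private long bufferedBytes = 0;
    private final Deque<byte[]> buffer = new ArrayDeque<>();

    InputBufferSketch(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    void add(byte[] rawRecord) {
        buffer.addLast(rawRecord);
        bufferedBytes += rawRecord.length;  // account for actual record size
    }

    boolean shouldPauseFetching() {
        return bufferedBytes > maxBytes;
    }

    public static void main(String[] args) {
        InputBufferSketch b = new InputBufferSketch(100);
        b.add(new byte[60]);
        System.out.println(b.shouldPauseFetching()); // 60 bytes <= 100
        b.add(new byte[60]);
        System.out.println(b.shouldPauseFetching()); // 120 bytes > 100
    }
}
```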





[GitHub] [kafka] vamossagar12 commented on pull request #11796: KAFKA-13152: Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2022-03-21 Thread GitBox


vamossagar12 commented on pull request #11796:
URL: https://github.com/apache/kafka/pull/11796#issuecomment-1074698895


   Thanks @guozhangwang !






[GitHub] [kafka] showuon commented on a change in pull request #11923: KAFKA-6718 / Documentation

2022-03-21 Thread GitBox


showuon commented on a change in pull request #11923:
URL: https://github.com/apache/kafka/pull/11923#discussion_r831722091



##
File path: docs/streams/architecture.html
##
@@ -161,6 +161,12 @@ 

[GitHub] [kafka] fxbing commented on a change in pull request #11912: KAFKA-13752: Uuid compare using equals in java

2022-03-21 Thread GitBox


fxbing commented on a change in pull request #11912:
URL: https://github.com/apache/kafka/pull/11912#discussion_r831720057



##
File path: 
clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java
##
@@ -151,7 +151,7 @@ public Cluster buildCluster() {
 if (metadata.error == Errors.NONE) {
 if (metadata.isInternal)
 internalTopics.add(metadata.topic);
-if (metadata.topicId() != null && metadata.topicId() != 
Uuid.ZERO_UUID) {
+if (metadata.topicId() != null && 
!Uuid.ZERO_UUID.equals(metadata.topicId())) {

Review comment:
   done. PTAL @dajac 
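The motivation for this diff is easy to demonstrate with the JDK's own java.util.UUID (Kafka's Uuid is a separate class, but reference equality vs. value equality behaves the same way for any Java object):

```java
import java.util.UUID;

// Two distinct UUID objects holding the same value: `!=` compares object
// references, while equals() compares values. The MetadataResponse fix
// swaps the former for the latter so equal topic IDs compare as equal.
public class UuidEqualityDemo {
    public static void main(String[] args) {
        UUID zeroA = new UUID(0L, 0L);
        UUID zeroB = new UUID(0L, 0L);
        System.out.println(zeroA != zeroB);      // true: different objects
        System.out.println(zeroA.equals(zeroB)); // true: same value
    }
}
```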








[jira] [Commented] (KAFKA-13751) On the broker side, OAUTHBEARER is not compatible with other SASL mechanisms

2022-03-21 Thread RivenSun (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17510185#comment-17510185
 ] 

RivenSun commented on KAFKA-13751:
--

Hi [~dajac] 

Unfortunately, I am posting a new question, due to the impure value of 
`Map<String, JaasContext> jaasContexts`. The `jaasContexts` variable is constructed by the 
ChannelBuilders#create method.

When a user uses jaasConfigFile and multiple LoginModules are configured.

Users can configure, in their server.properties,
listener.name.\{listenerName}.\{saslMechanism}.sasl.server.callback.handler.class,

but they cannot configure

listener.name.\{listenerName}.\{saslMechanism}.sasl.login.class

and

listener.name.\{listenerName}.\{saslMechanism}.sasl.login.callback.handler.class.

Because the `SaslChannelBuilder#createServerCallbackHandlers` method does not 
verify the jaasContext of each mechanism.
But the LoginManager.configuredClassOrDefault method is used in the 
construction of loginClass and loginCallbackClass. This method explicitly 
checks jaasContext.configurationEntries().size().


1. Why do the three listener.name.\{listenerName}.\{saslMechanism}.some.prop 
configurations have different validation logic?
2. The log output in configuredClassOrDefault is also inappropriate. Do we have 
to force the broker to use sasl.jaas.config when using the sasl.login.class and 
sasl.login.callback.handler.class configurations? This requirement is not stated 
in the official Kafka documentation. Can users not use these two configurations when 
they use jaasConfigFile?



In short, when constructing the `Map<String, JaasContext> jaasContexts`, whether 
using jaasConfigFile or sasl.jaas.config, *we should ensure that the 
JaasContext of each mechanism is pure and does not contain the loginModules of 
other mechanisms. The loginModule of the mechanism itself in the JaasContext should 
only be configured once.*
If so, all issues will be resolved, including KAFKA-13422.
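One way to keep each mechanism's JaasContext pure is the per-listener, per-mechanism form of sasl.jaas.config rather than a single multi-module KafkaServer section in a jaas.conf file. A sketch (untested; listener name and credentials are taken from the example above, line continuations are illustrative):

```properties
# server.properties sketch: one LoginModule per mechanism, so each
# mechanism's JaasContext contains exactly one configuration entry.
listener.name.sasl_ssl.plain.sasl.jaas.config=\
  org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" password="admin" user_admin="admin" user_alice="alice";
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=\
  org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" password="admin_scram";
listener.name.sasl_ssl.oauthbearer.sasl.jaas.config=\
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
```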

[~rsivaram]  [~ijuma] WDYT?

Thanks.

> On the broker side, OAUTHBEARER is not compatible with other SASL mechanisms
> 
>
> Key: KAFKA-13751
> URL: https://issues.apache.org/jira/browse/KAFKA-13751
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.1
>Reporter: RivenSun
>Priority: Critical
>
> h1. Phenomenon:
>  SASL/OAUTHBEARER, whether implemented by default or customized by the user, 
> is not compatible with other SASL mechanisms.
> h3.  
> case1:
> kafka_server_jaas_oauth.conf
> {code:java}
> KafkaServer {
>   org.apache.kafka.common.security.plain.PlainLoginModule required
>   username="admin"
>   password="admin"
>   user_admin="admin"
>   user_alice="alice"; 
>org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule 
> required;
>org.apache.kafka.common.security.scram.ScramLoginModule required
>   username="admin"
>   password="admin_scram";
> }; {code}
>  server.properties
> {code:java}
> advertised.listeners=SASL_PLAINTEXT://publicIp:8779,SASL_SSL://publicIp:8889,OAUTH://publicIp:8669
>  
> listener.security.protocol.map=INTERNAL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OAUTH:SASL_SSL
> sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512,OAUTHBEARER{code}
> Error when starting kafka:
> server.log
> {code:java}
> [2022-03-16 13:18:42,658] ERROR [KafkaServer id=1] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: 
> Must supply exactly 1 non-null JAAS mechanism configuration (size was 3)
>         at 
> org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:172)
>         at kafka.network.Processor.<init>(SocketServer.scala:724)
>         at kafka.network.SocketServer.newProcessor(SocketServer.scala:367)
>         at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:252)
>         at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:251)
>         at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:214)
>         at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:211)
>         at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>         at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>         at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:211)
>         at kafka.network.SocketServer.startup(SocketServer.scala:122)
>         at kafka.server.KafkaServer.startup(KafkaServer.scala:266)
>         at 
> 

[GitHub] [kafka] dengziming commented on pull request #11173: KAFKA-13509: Support max timestamp in GetOffsetShell

2022-03-21 Thread GitBox


dengziming commented on pull request #11173:
URL: https://github.com/apache/kafka/pull/11173#issuecomment-1074654909


   > Could you take a look please?
   
   Sure, I will check it soon.






[GitHub] [kafka] mjsax commented on a change in pull request #11829: [RFC][2/N][emit final] add processor metadata to be committed with offset

2022-03-21 Thread GitBox


mjsax commented on a change in pull request #11829:
URL: https://github.com/apache/kafka/pull/11829#discussion_r831688520



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorMetadata.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.internals;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.utils.Bytes;
+
+/**
+ * ProcessorMetadata to be accessed and populated by the processor node. This will
+ * be committed along with the offset.
+ */
+public class ProcessorMetadata {
+
+// Does this need to be thread safe? I think not since there's one per task
+private final Map<Bytes, Bytes> globalMetadata;
+
+public ProcessorMetadata() {
+globalMetadata = new HashMap<>();
+}
+
+public static ProcessorMetadata deserialize(final byte[] 
ProcessorMetadata, final TopicPartition partition) {
+// TODO: deserialize
+return null;
+}
+
+public void merge(final ProcessorMetadata other) {

Review comment:
   > this should be committed to all TopicPartition
   
   Sound like we would commit it redundantly? Why store the same metadata for 
multiple partitions?
   
   > We need to merge the metadata from all TopicPartition to get correct 
globalMetadata in case some commits fail?
   
   Not sure if I can follow. If the commit fails, we would fall back to the 
previous offset including the previous metadata, right?








[GitHub] [kafka] mjsax commented on a change in pull request #11829: [RFC][2/N][emit final] add processor metadata to be committed with offset

2022-03-21 Thread GitBox


mjsax commented on a change in pull request #11829:
URL: https://github.com/apache/kafka/pull/11829#discussion_r831687972



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorMetadata.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.internals;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.utils.Bytes;
+
+/**
+ * ProcessorMetadata to be accessed and populated by processor nodes. This will
+ * be committed along with the offset.
+ */
+public class ProcessorMetadata {
+
+// Does this need to be thread safe? I think not since there's one per task
+private final Map globalMetadata;

Review comment:
   Might be premature optimization to make it generic. I would stick to 
just what we need. That is also why I think `` might just be good 
enough as we intend to only use it for "emit final".
   
   > So why not pass byte[] directly and serialize them finally to String for 
OffsetAndMetadata?
   
   My point is that we should avoid changing the format/serialization twice, 
but only once. Either we accept `long` and let the runtime do `long -> String` 
directly. Or we just use `String` and the runtime has nothing to do but to 
concatenate string, and the processor needs to do the `long -> String` 
conversion. In both cases it's a single translation step. Introducing `byte[]` 
add a second translation step: what would be the advantage of having two steps 
instead of one?
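The single-step translation described above can be sketched in a few lines. This is only an illustrative sketch, not code from the PR; the helper names `encodeMetadata` and `decodeMetadata` are made up. The resulting String is what would be handed to the consumer's `OffsetAndMetadata(offset, metadata)` constructor when committing.

```java
// Illustrative sketch (not from the PR): encode a processor's long value
// directly as the String metadata committed alongside the offset, so the
// runtime performs a single long -> String translation step.
public class MetadataCodec {

    // Hypothetical helper: long -> String in one step.
    public static String encodeMetadata(final long value) {
        return Long.toString(value);
    }

    // Hypothetical helper: String -> long when the committed metadata is read back.
    public static long decodeMetadata(final String metadata) {
        return Long.parseLong(metadata);
    }

    public static void main(final String[] args) {
        final String metadata = encodeMetadata(1647852000123L);
        if (decodeMetadata(metadata) != 1647852000123L) {
            throw new AssertionError("round trip failed");
        }
        System.out.println(metadata);
    }
}
```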
   
   








[GitHub] [kafka] guozhangwang commented on pull request #11796: KAFKA-13152: Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2022-03-21 Thread GitBox


guozhangwang commented on pull request #11796:
URL: https://github.com/apache/kafka/pull/11796#issuecomment-1074547962


   Thanks @vamossagar12 . I've merged the PR, and please go ahead and mark the 
ticket / KIP as for 3.3.0.






[GitHub] [kafka] guozhangwang merged pull request #11796: KAFKA-13152: Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2022-03-21 Thread GitBox


guozhangwang merged pull request #11796:
URL: https://github.com/apache/kafka/pull/11796


   






[GitHub] [kafka] showuon commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


showuon commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074510674


   @ijuma , I see. Thanks for the reminder. Will do.
   @LiamClarkeNZ , please help add a note to the 3.2 notable changes in 
upgrade.html. Thanks






[GitHub] [kafka] rondagostino commented on pull request #11691: KAFKA-13598: enable idempotence producer by default and validate the configs

2022-03-21 Thread GitBox


rondagostino commented on pull request #11691:
URL: https://github.com/apache/kafka/pull/11691#issuecomment-1074484675


   This PR broke this system test:
   
   `TC_PATHS="tests/kafkatest/tests/tools/log4j_appender_test.py" bash 
tests/docker/run_tests.sh`
   
   






[GitHub] [kafka] cmccabe merged pull request #11913: MINOR: Remove scala KafkaException

2022-03-21 Thread GitBox


cmccabe merged pull request #11913:
URL: https://github.com/apache/kafka/pull/11913


   






[GitHub] [kafka] cmccabe commented on pull request #11912: KAFKA-13752: Uuid compare using equals in java

2022-03-21 Thread GitBox


cmccabe commented on pull request #11912:
URL: https://github.com/apache/kafka/pull/11912#issuecomment-107703


   I'm surprised that this wasn't caught by spotbugs. Maybe we need to start 
enabling more spotbugs checks.






[GitHub] [kafka] cmccabe merged pull request #11918: MINOR: Small improvements for KAFKA-13587

2022-03-21 Thread GitBox


cmccabe merged pull request #11918:
URL: https://github.com/apache/kafka/pull/11918


   






[GitHub] [kafka] cmccabe commented on pull request #11920: [WIP] KAFKA-13672 Race condition in DynamicBrokerConfig

2022-03-21 Thread GitBox


cmccabe commented on pull request #11920:
URL: https://github.com/apache/kafka/pull/11920#issuecomment-1074436997


   Seems like a reasonable approach






[jira] [Updated] (KAFKA-13587) Implement unclean leader election in KIP-704

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-13587:
--
Fix Version/s: 3.2.0
   (was: 3.3.0)

> Implement unclean leader election in KIP-704
> 
>
> Key: KAFKA-13587
> URL: https://issues.apache.org/jira/browse/KAFKA-13587
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
> Fix For: 3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-13682) Implement auto preferred leader election in KRaft Controller

2022-03-21 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio updated KAFKA-13682:
---
Fix Version/s: 3.2.0

> Implement auto preferred leader election in KRaft Controller
> 
>
> Key: KAFKA-13682
> URL: https://issues.apache.org/jira/browse/KAFKA-13682
> Project: Kafka
>  Issue Type: Task
>  Components: kraft
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
>  Labels: kip-500
> Fix For: 3.2.0
>
>






[jira] [Resolved] (KAFKA-13682) Implement auto preferred leader election in KRaft Controller

2022-03-21 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio resolved KAFKA-13682.

Resolution: Fixed

> Implement auto preferred leader election in KRaft Controller
> 
>
> Key: KAFKA-13682
> URL: https://issues.apache.org/jira/browse/KAFKA-13682
> Project: Kafka
>  Issue Type: Task
>  Components: kraft
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
>  Labels: kip-500
> Fix For: 3.2.0
>
>






[jira] [Resolved] (KAFKA-13587) Implement unclean leader election in KIP-704

2022-03-21 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio resolved KAFKA-13587.

Resolution: Fixed

> Implement unclean leader election in KIP-704
> 
>
> Key: KAFKA-13587
> URL: https://issues.apache.org/jira/browse/KAFKA-13587
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
> Fix For: 3.3.0
>
>






[GitHub] [kafka] cadonna merged pull request #11925: MINOR: Bump trunk to 3.3.0-SNAPSHOT

2022-03-21 Thread GitBox


cadonna merged pull request #11925:
URL: https://github.com/apache/kafka/pull/11925


   






[jira] [Updated] (KAFKA-7632) Support Compression Level

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-7632:
-
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.3.0
>
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a producer, broker, and topic configuration setting 
> will allow the user to adjust the level.
> Since it applies to the other compression codecs, we should add the same 
> functionalities to them.





[jira] [Updated] (KAFKA-7739) Kafka Tiered Storage

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-7739:
-
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Kafka Tiered Storage
> 
>
> Key: KAFKA-7739
> URL: https://issues.apache.org/jira/browse/KAFKA-7739
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Harsha
>Assignee: Satish Duggana
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.3.0
>
>
> KIP: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage]





[jira] [Updated] (KAFKA-13587) Implement unclean leader election in KIP-704

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-13587:
--
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Implement unclean leader election in KIP-704
> 
>
> Key: KAFKA-13587
> URL: https://issues.apache.org/jira/browse/KAFKA-13587
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
> Fix For: 3.3.0
>
>






[jira] [Resolved] (KAFKA-6718) Rack Aware Stand-by Task Assignment for Kafka Streams

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-6718.
--
Resolution: Fixed

> Rack Aware Stand-by Task Assignment for Kafka Streams
> -
>
> Key: KAFKA-6718
> URL: https://issues.apache.org/jira/browse/KAFKA-6718
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Deepak Goyal
>Assignee: Levani Kokhreidze
>Priority: Major
>  Labels: kip
> Fix For: 3.2.0
>
>
> Machines in a data centre are sometimes grouped in racks. Racks provide 
> isolation as each rack may be in a different physical location and has its 
> own power source. When tasks are properly replicated across racks, it 
> provides fault tolerance in that if a rack goes down, the remaining racks can 
> continue to serve traffic.
>   
>  This feature is already implemented at Kafka 
> [KIP-36|https://cwiki.apache.org/confluence/display/KAFKA/KIP-36+Rack+aware+replica+assignment]
>  but we needed similar for task assignments at Kafka Streams Application 
> layer. 
>   
>  This feature enables replica tasks to be assigned on different racks for 
> fault-tolerance.
>  NUM_STANDBY_REPLICAS = x
>  totalTasks = x+1 (replica + active)
>  # If there are no rackID provided: Cluster will behave rack-unaware
>  # If same rackId is given to all the nodes: Cluster will behave rack-unaware
>  # If (totalTasks <= number of racks), then Cluster will be rack aware i.e. 
> each replica task is each assigned to a different rack.
>  # If (totalTasks > number of racks), then it will first assign tasks on 
> different racks, further tasks will be assigned to least loaded node, cluster 
> wide.
> We have added another config in StreamsConfig called "RACK_ID_CONFIG" which 
> helps StickyPartitionAssignor to assign tasks in such a way that no two 
> replica tasks are on same rack if possible.
>  Post that, it also helps to maintain stickiness within the rack.





[jira] [Updated] (KAFKA-13043) Add Admin API for batched offset fetch requests (KIP-709)

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-13043:
--
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Add Admin API for batched offset fetch requests (KIP-709)
> -
>
> Key: KAFKA-13043
> URL: https://issues.apache.org/jira/browse/KAFKA-13043
> Project: Kafka
>  Issue Type: New Feature
>  Components: admin
>Affects Versions: 3.1.0, 3.0.0
>Reporter: Rajini Sivaram
>Assignee: Sanjana Kaundinya
>Priority: Major
> Fix For: 3.3.0
>
>
> Protocol changes and broker-side changes to process batched 
> OffsetFetchRequests were added under KAFKA-12234. This ticket is to add Admin 
> API changes to use this feature.





[jira] [Updated] (KAFKA-12473) Make the "cooperative-sticky, range" as the default assignor

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-12473:
--
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Make the "cooperative-sticky, range" as the default assignor
> 
>
> Key: KAFKA-12473
> URL: https://issues.apache.org/jira/browse/KAFKA-12473
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: A. Sophie Blee-Goldman
>Assignee: Luke Chen
>Priority: Critical
>  Labels: kip
> Fix For: 3.3.0
>
>
> Now that 3.0 is coming up, we can change the default 
> ConsumerPartitionAssignor to something better than the RangeAssignor. The 
> original plan was to switch over to the StickyAssignor, but now that we have 
> incremental cooperative rebalancing we should  consider using the new 
> CooperativeStickyAssignor instead: this will enable the consumer group to 
> follow the COOPERATIVE protocol, improving the rebalancing experience OOTB.
> KIP: 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177048248





[jira] [Updated] (KAFKA-13479) Interactive Query v2

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-13479:
--
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Interactive Query v2
> 
>
> Key: KAFKA-13479
> URL: https://issues.apache.org/jira/browse/KAFKA-13479
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 3.3.0
>
>
> Kafka Streams supports an interesting and innovative API for "peeking" into 
> the internal state of running stateful stream processors from outside of the 
> application, called Interactive Query (IQ). This functionality has proven 
> invaluable to users over the years for everything from debugging running 
> applications to serving low latency queries straight from the Streams runtime.
> However, the actual interfaces for IQ were designed in the very early days of 
> Kafka Streams, before the project had gained significant adoption, and in the 
> absence of much precedent for this kind of API in peer projects. With the 
> benefit of hindsight, we can observe several problems with the original 
> design that we hope to address in a revised framework that will serve Streams 
> users well for many years to come.
>  
> This ticket tracks the implementation of KIP-796: 
> https://cwiki.apache.org/confluence/x/34xnCw





[jira] [Updated] (KAFKA-13217) Reconsider skipping the LeaveGroup on close() or add an overload that does so

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-13217:
--
Fix Version/s: 3.3.0
   (was: 3.2.0)

> Reconsider skipping the LeaveGroup on close() or add an overload that does so
> -
>
> Key: KAFKA-13217
> URL: https://issues.apache.org/jira/browse/KAFKA-13217
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Seungchan Ahn
>Priority: Major
>  Labels: needs-kip, newbie, newbie++
> Fix For: 3.3.0
>
>
> In Kafka Streams, when an instance is shut down via the close() API, we 
> intentionally skip sending a LeaveGroup request. This is because often the 
> shutdown is not due to a scaling down event but instead some transient 
> closure, such as during a rolling bounce. In cases where the instance is 
> expected to start up again shortly after, we originally wanted to avoid that 
> member's tasks from being redistributed across the remaining group members 
> since this would disturb the stable assignment and could cause unnecessary 
> state migration and restoration. We also hoped
> to limit the disruption to just a single rebalance, rather than forcing the 
> group to rebalance once when the member shuts down and then again when it 
> comes back up. So it's really an optimization for the case in which the 
> shutdown is temporary.
>  
> That said, many of those optimizations are no longer necessary or at least 
> much less useful given recent features and improvements. For example 
> rebalances are now lightweight so skipping the 2nd rebalance is not as worth 
> optimizing for, and the new assignor will take into account the actual 
> underlying state for each task/partition assignment, rather than just the 
> previous assignment, so the assignment should be considerably more stable 
> across bounces and rolling restarts. 
>  
> Given that, it might be time to reconsider this optimization. Alternatively, 
> we could introduce another form of the close() API that forces the member to 
> leave the group, to be used in event of actual scale down rather than a 
> transient bounce.





[jira] [Updated] (KAFKA-13410) KRaft Upgrades

2022-03-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna updated KAFKA-13410:
--
Fix Version/s: 3.3.0
   (was: 3.2.0)

> KRaft Upgrades
> --
>
> Key: KAFKA-13410
> URL: https://issues.apache.org/jira/browse/KAFKA-13410
> Project: Kafka
>  Issue Type: New Feature
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Major
> Fix For: 3.3.0
>
>
> This is the placeholder JIRA for KIP-778





[GitHub] [kafka] cadonna opened a new pull request #11925: MINOR: Bump trunk to 3.3.0-SNAPSHOT

2022-03-21 Thread GitBox


cadonna opened a new pull request #11925:
URL: https://github.com/apache/kafka/pull/11925


   Version bumps on trunk following the creation of the 3.2 release branch.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] dajac commented on a change in pull request #11912: KAFKA-13752: Uuid compare using equals in java

2022-03-21 Thread GitBox


dajac commented on a change in pull request #11912:
URL: https://github.com/apache/kafka/pull/11912#discussion_r831450841



##
File path: 
clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java
##
@@ -151,7 +151,7 @@ public Cluster buildCluster() {
 if (metadata.error == Errors.NONE) {
 if (metadata.isInternal)
 internalTopics.add(metadata.topic);
-if (metadata.topicId() != null && metadata.topicId() != 
Uuid.ZERO_UUID) {
+if (metadata.topicId() != null && 
!Uuid.ZERO_UUID.equals(metadata.topicId())) {

Review comment:
   @fxbing Could you add a unit test for this one as well?
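The change in the diff above swaps a reference comparison (`!=`) for a value comparison (`equals`). A minimal sketch of why that matters is below; it uses `java.util.UUID` as a stand-in for Kafka's `org.apache.kafka.common.Uuid`, since both are value objects whose `equals()` compares the underlying bits.

```java
// Sketch of the pitfall the fix addresses: != compares object references,
// not values. java.util.UUID stands in for org.apache.kafka.common.Uuid;
// both override equals() to compare the underlying 128 bits.
import java.util.UUID;

public class UuidCompare {

    public static boolean referenceDiffers(final UUID a, final UUID b) {
        return a != b; // true for two distinct objects, even with equal bits
    }

    public static void main(final String[] args) {
        final UUID zero1 = new UUID(0L, 0L);
        final UUID zero2 = new UUID(0L, 0L);
        // Reference comparison says they differ...
        if (!referenceDiffers(zero1, zero2)) throw new AssertionError();
        // ...but value comparison says they are equal, which is what the
        // ZERO_UUID check in MetadataResponse actually intends.
        if (!zero1.equals(zero2)) throw new AssertionError();
        System.out.println("equals() compares values; != compares references");
    }
}
```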








[GitHub] [kafka] kkonstantine commented on pull request #11908: KAFKA-13748: Do not include file stream connectors in Connect's CLASSPATH and plugin.path by default

2022-03-21 Thread GitBox


kkonstantine commented on pull request #11908:
URL: https://github.com/apache/kafka/pull/11908#issuecomment-1074268711


   Here's a run of all the Connect system tests with the proposed changes in 
the PR. 
   
https://jenkins.confluent.io/view/All/job/system-test-kafka-branch-builder/4814/
   
   @rhauch the 9 tests that have failed have done so after loading the classes 
successfully. The failures are not relevant to the changes here and the issues 
with the `test_broker_compatibility` seem to make sense to address separately. 






[GitHub] [kafka] lihaosky commented on a change in pull request #11829: [RFC][2/N][emit final] add processor metadata to be committed with offset

2022-03-21 Thread GitBox


lihaosky commented on a change in pull request #11829:
URL: https://github.com/apache/kafka/pull/11829#discussion_r831417722



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorMetadata.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.internals;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.utils.Bytes;
+
+/**
+ * ProcessorMetadata to be accessed and populated by processor nodes. This will
+ * be committed along with the offset.
+ */
+public class ProcessorMetadata {
+
+// Does this need to be thread safe? I think not since there's one per task
+private final Map globalMetadata;
+
+public ProcessorMetadata() {
+globalMetadata = new HashMap<>();
+}
+
+public static ProcessorMetadata deserialize(final byte[] processorMetadata, final TopicPartition partition) {

Review comment:
   Yeah. `TopicPartition` can be dropped.

##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorMetadata.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.internals;
+
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.utils.Bytes;
+
+/**
+ * ProcessorMetadata to be accessed and populated by processor nodes. This will
+ * be committed along with the offset.
+ */
+public class ProcessorMetadata {
+
+// Does this need to be thread safe? I think not since there's one per task
+private final Map globalMetadata;

Review comment:
   > Why is this a key-value pair
   
   My original intention is to make it flexible so that even different 
processors can commit/store multiple key/value pairs. So the key/value could be 
anything the processor chooses. Now I think maybe we could also add a namespace 
to prevent collisions of keys from different processors. The namespace can be 
the processor name.
   
   > And why are the types 
   
   Why not ``? I think `Long` is not flexible enough. For emit 
final use case, `Long` is fine. 
   Why not ``? I think it takes more steps for the processor to 
serialize it to `String` in some cases. e.g. for `Long`, the processor needs to 
serialize it to `byte[]` and then to `String`. So why not pass `byte[]` directly 
and serialize it finally to `String` for `OffsetAndMetadata`?
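The namespacing idea floated above can be sketched as follows. This is a hypothetical illustration, not code from the PR: the class name `NamespacedMetadata` and the `"processorName:key"` prefixing scheme are made up to show how two processors using the same key would stay isolated.

```java
// Hypothetical sketch of namespacing metadata keys by processor name, so
// two processors writing the same key cannot collide. Not code from the PR.
import java.util.HashMap;
import java.util.Map;

public class NamespacedMetadata {

    private final Map<String, byte[]> metadata = new HashMap<>();

    // Prefix each key with the owning processor's name.
    private static String namespacedKey(final String processorName, final String key) {
        return processorName + ":" + key;
    }

    public void put(final String processorName, final String key, final byte[] value) {
        metadata.put(namespacedKey(processorName, key), value);
    }

    public byte[] get(final String processorName, final String key) {
        return metadata.get(namespacedKey(processorName, key));
    }

    public static void main(final String[] args) {
        final NamespacedMetadata m = new NamespacedMetadata();
        m.put("window-agg", "emit", new byte[] {1});
        m.put("suppress", "emit", new byte[] {2});
        // Same key "emit" from two different processors stays distinct.
        if (m.get("window-agg", "emit")[0] != 1) throw new AssertionError();
        if (m.get("suppress", "emit")[0] != 2) throw new AssertionError();
        System.out.println("no collision");
    }
}
```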

##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorMetadata.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.

[jira] [Commented] (KAFKA-13739) Sliding window without grace not working

2022-03-21 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17510043#comment-17510043
 ] 

Matthias J. Sax commented on KAFKA-13739:
-

Thank you! – Let us know if you need any help.

> Sliding window without grace not working
> 
>
> Key: KAFKA-13739
> URL: https://issues.apache.org/jira/browse/KAFKA-13739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.1.0
>Reporter: bounkong khamphousone
>Priority: Minor
>  Labels: beginner, newbie
>
> Hi everyone! I would like to understand why KafkaStreams DSL offer the 
> ability to express a SlidingWindow with no grace period but seems that it 
> doesn't work. [confluent's 
> site|https://docs.confluent.io/platform/current/streams/developer-guide/dsl-api.html#sliding-time-windows]
>  state that grace period is required and with the deprecated method, it's 
> default to 24 hours.
> Doing a basic sliding window with a count, if I set grace period to 1 ms, 
> expected output is done. Based on the sliding window documentation, lower and 
> upper bounds are inclusive.
> If I set grace period to 0 ms, I can see that record is not skipped at 
> KStreamSlidingWindowAggregate(l.126) but when we try to create the window and 
> push the event in KStreamSlidingWindowAggregate#createWindows we call the 
> method updateWindowAndForward(l.417). This method (l.468) check that 
> {{{}windowEnd > closeTime{}}}.
> closeTime is defined as {{observedStreamTime - window.gracePeriodMs}} 
> (Sliding window configuration)
> windowEnd is defined as {{{}inputRecordTimestamp{}}}.
>  
> For a first event with a record timestamp, we can assume that 
> observedStreamTime is equal to inputRecordTimestamp.
>  
> Therefore, closeTime is {{inputRecordTimestamp - 0}} (gracePeriodMS) which 
> results to {{{}inputRecordTimestamp{}}}.
> If we go back to the check done in {{updateWindowAndForward}} method, then we 
> have inputRecordTimestamp > inputRecordTimestamp which is always false. The 
> record is then skipped for record's own window.
> Stating that lower and upper bounds are inclusive, I would have expected the 
> event to be pushed in the store and forwarded. Hence, the check would be 
> {{{}windowEnd >= closeTime{}}}.
>  
> Is it a bug or is it intended ?
> Thanks in advance for your explanations!
> Best regards!
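Reduced to arithmetic, the report above boils down to a strict-versus-inclusive comparison at the window's close time. The sketch below is a hypothetical reduction of that check; the class, method, and parameter names are illustrative, not Kafka's actual code:

```java
public class SlidingWindowBoundaryCheck {

    // Hypothetical reduction of the close-time check described in the report:
    // a window is forwarded only when windowEnd > closeTime (strict '>').
    static boolean forwarded(long inputRecordTimestamp, long observedStreamTime, long gracePeriodMs) {
        long closeTime = observedStreamTime - gracePeriodMs;
        long windowEnd = inputRecordTimestamp;
        return windowEnd > closeTime;
    }

    public static void main(String[] args) {
        long ts = 1_000L;
        // First record ever seen: observed stream time == record timestamp.
        System.out.println("grace 1 ms forwarded: " + forwarded(ts, ts, 1)); // true
        System.out.println("grace 0 ms forwarded: " + forwarded(ts, ts, 0)); // false: the reported skip
    }
}
```

With grace 0, `closeTime == windowEnd` for the first record, so the strict `>` always skips it; an inclusive `>=` would forward it, which is the change the reporter suggests.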



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [kafka] dajac commented on pull request #11173: KAFKA-13509: Support max timestamp in GetOffsetShell

2022-03-21 Thread GitBox


dajac commented on pull request #11173:
URL: https://github.com/apache/kafka/pull/11173#issuecomment-1074212093


   @dengziming I just saw this 
[failure](https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-11900/4/tests)
 which seems related to changes that we have done here. Could you take a look 
please?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-13728) PushHttpMetricsReporter no longer pushes metrics when network failure is recovered.

2022-03-21 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-13728.
---
Fix Version/s: 3.2.0
   Resolution: Fixed

> PushHttpMetricsReporter no longer pushes metrics when network failure is 
> recovered.
> ---
>
> Key: KAFKA-13728
> URL: https://issues.apache.org/jira/browse/KAFKA-13728
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.1.0
>Reporter: XiaoyiPeng
>Priority: Minor
> Fix For: 3.2.0
>
>
> The class *PushHttpMetricsReporter* no longer pushes metrics when network 
> failure is recovered.
> I debugged the code and found the problem here :
> [https://github.com/apache/kafka/blob/dc36dedd28ff384218b669de13993646483db966/tools/src/main/java/org/apache/kafka/tools/PushHttpMetricsReporter.java#L214-L221]
>  
> When we submit a task to the *ScheduledThreadPoolExecutor* that needs to be 
> executed periodically, if the task throws an exception and is not swallowed, 
> the task will no longer be scheduled to execute.
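The executor behavior described above is standard `ScheduledExecutorService` semantics: a periodic task that propagates an exception is silently cancelled. A common defensive pattern, sketched below (a generic illustration, not the actual PushHttpMetricsReporter fix), is to catch inside the task body:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SafePeriodicTask {

    // Runs a deliberately failing periodic task for roughly `runMillis`
    // and returns how many times it executed. Because the exception is
    // caught *inside* the Runnable, the schedule stays alive; letting it
    // propagate would suppress all subsequent executions.
    static int runCount(long runMillis) {
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
        AtomicInteger runs = new AtomicInteger();
        executor.scheduleAtFixedRate(() -> {
            try {
                runs.incrementAndGet();
                throw new RuntimeException("simulated network failure");
            } catch (RuntimeException e) {
                // swallow and (in real code) log, so the next run still happens
            }
        }, 0, 20, TimeUnit.MILLISECONDS);
        try {
            Thread.sleep(runMillis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        executor.shutdownNow();
        return runs.get();
    }

    public static void main(String[] args) {
        System.out.println("executions despite failures: " + runCount(200));
    }
}
```

Without the inner try/catch, the first simulated failure would cancel the schedule and the count would stay at 1, which mirrors the reported behavior.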





[GitHub] [kafka] junrao commented on pull request #11842: KAFKA-13687: Limiting the amount of bytes to be read in a segment logs

2022-03-21 Thread GitBox


junrao commented on pull request #11842:
URL: https://github.com/apache/kafka/pull/11842#issuecomment-1074159585


   @sciclon2 : Sorry, I missed the KIP in the summary. We just need to get the 
KIP approved.






[GitHub] [kafka] ijuma edited a comment on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


ijuma edited a comment on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074116541


   @showuon We should add a note to `upgrade.html` for this (and similar types 
of changes in the future).






[GitHub] [kafka] ijuma commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


ijuma commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074116541


   @showuon We should add a note to `upgrade.html` for this (and similar types 
of changes).






[GitHub] [kafka] ijuma commented on pull request #11475: KAFKA-7077: Use default producer settings in Connect Worker

2022-03-21 Thread GitBox


ijuma commented on pull request #11475:
URL: https://github.com/apache/kafka/pull/11475#issuecomment-1074114315


   @kkonstantine @rhauch Is it OK to enable idempotence by default in this 
case? Are there any compatibility or behavior concerns?






[GitHub] [kafka] idank commented on a change in pull request #11900: MINOR: Fix class comparison in `AlterConfigPolicy.RequestMetadata.equals()`

2022-03-21 Thread GitBox


idank commented on a change in pull request #11900:
URL: https://github.com/apache/kafka/pull/11900#discussion_r831257729



##
File path: 
clients/src/main/java/org/apache/kafka/server/policy/AlterConfigPolicy.java
##
@@ -71,7 +71,7 @@ public int hashCode() {
 
 @Override
 public boolean equals(Object o) {
-if (o == null || o.getClass() != o.getClass()) return false;
+if ((o == null) || (!o.getClass().equals(getClass()))) return false;

Review comment:
   Not at all, thanks!








[GitHub] [kafka] mimaison commented on pull request #11906: MINOR: Doc updates for Kafka 3.0.1

2022-03-21 Thread GitBox


mimaison commented on pull request #11906:
URL: https://github.com/apache/kafka/pull/11906#issuecomment-1074007613


   @dajac It was relatively simple to port docs changes from kafka/3.0 since 
3.0.0. However because we often merge changes only in kafka-site, it's harder 
to go the other way around.
   
   Considering we're unlikely to release 3.0.2, I'm not sure it's worth the 
effort to do it in 3.0.
   
   






[GitHub] [kafka] dongjinleekr commented on pull request #10244: KAFKA-12399: Deprecate Log4J Appender

2022-03-21 Thread GitBox


dongjinleekr commented on pull request #10244:
URL: https://github.com/apache/kafka/pull/10244#issuecomment-1073969969


   @mimaison Here is the update, with a migration guide! After some trial and 
error, I concluded that a full comparison table would help.






[GitHub] [kafka] dongjinleekr removed a comment on pull request #10244: KAFKA-12399: Deprecate Log4J Appender

2022-03-21 Thread GitBox


dongjinleekr removed a comment on pull request #10244:
URL: https://github.com/apache/kafka/pull/10244#issuecomment-1073969635


   @mimaison Here is the update, with a migration guide! After some trial and 
error, I decided that a full comparison table would help.






[GitHub] [kafka] dongjinleekr commented on pull request #10244: KAFKA-12399: Deprecate Log4J Appender

2022-03-21 Thread GitBox


dongjinleekr commented on pull request #10244:
URL: https://github.com/apache/kafka/pull/10244#issuecomment-1073969635


   @mimaison Here is the update, with a migration guide! After some trial and 
error, I decided that a full comparison table would help.






[jira] [Assigned] (KAFKA-13755) Broker heartbeat event should have deadline

2022-03-21 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur reassigned KAFKA-13755:


Assignee: (was: David Arthur)

> Broker heartbeat event should have deadline
> ---
>
> Key: KAFKA-13755
> URL: https://issues.apache.org/jira/browse/KAFKA-13755
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: David Arthur
>Priority: Minor
>  Labels: kip-500
> Fix For: 3.2.0
>
>
> When we schedule the event for processing the broker heartbeat request in 
> QuroumController, we do not give a deadline. This means that the event will 
> only be processed after all other events which do have a deadline. In the 
> case of the controller's queue getting filled up with deadline (i.e., 
> "deferred") events, we may not process the heartbeat before the broker 
> attempts to send another one.
>  
>  





[GitHub] [kafka] sciclon2 edited a comment on pull request #11842: KAFKA-13687: Limiting the amount of bytes to be read in a segment logs

2022-03-21 Thread GitBox


sciclon2 edited a comment on pull request #11842:
URL: https://github.com/apache/kafka/pull/11842#issuecomment-1073936798


   hi @junrao ,
   
   I did a 
[KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-824%3A+Allowing+dumping+segmentlogs+limiting+the+batches+in+the+output)
 before this PR; it is mentioned in the summary. Am I missing anything else? 
I also created the [discussion 
channel](https://www.mail-archive.com/dev@kafka.apache.org/msg123564.html) and 
the [jira](https://issues.apache.org/jira/browse/KAFKA-13687) ticket.
   
   Thanks






[GitHub] [kafka] sciclon2 edited a comment on pull request #11842: KAFKA-13687: Limiting the amount of bytes to be read in a segment logs

2022-03-21 Thread GitBox


sciclon2 edited a comment on pull request #11842:
URL: https://github.com/apache/kafka/pull/11842#issuecomment-1073936798


   hi @junrao ,
   
   I did a KIP before this PR; it is mentioned in the summary. Am I missing 
anything else? I also created the discussion channel and the jira ticket.
   
   Thanks






[GitHub] [kafka] mimaison commented on pull request #11922: Update upgrade.html for 3.1.1

2022-03-21 Thread GitBox


mimaison commented on pull request #11922:
URL: https://github.com/apache/kafka/pull/11922#issuecomment-1073936914


   Should we also update `quickstart.html`?






[GitHub] [kafka] sciclon2 commented on pull request #11842: KAFKA-13687: Limiting the amount of bytes to be read in a segment logs

2022-03-21 Thread GitBox


sciclon2 commented on pull request #11842:
URL: https://github.com/apache/kafka/pull/11842#issuecomment-1073936798


   hi @junrao ,
   
   I did a KIP before this PR; it is mentioned in the summary. Am I missing 
anything else? I also created the discussion channel and the jira ticket.






[GitHub] [kafka] dajac commented on a change in pull request #11900: MINOR: Fix class comparison in `AlterConfigPolicy.RequestMetadata.equals()`

2022-03-21 Thread GitBox


dajac commented on a change in pull request #11900:
URL: https://github.com/apache/kafka/pull/11900#discussion_r831102681



##
File path: 
clients/src/main/java/org/apache/kafka/server/policy/AlterConfigPolicy.java
##
@@ -71,7 +71,7 @@ public int hashCode() {
 
 @Override
 public boolean equals(Object o) {
-if (o == null || o.getClass() != o.getClass()) return false;
+if ((o == null) || (!o.getClass().equals(getClass()))) return false;

Review comment:
   I don't feel strong about the `!=` vs. `equals`. We can keep it as you 
did it. However, I just took the liberty to push a small unit test to your 
branch. I hope you don't mind.








[GitHub] [kafka] ddrid opened a new pull request #11924: MINOR: fix typos in TransactionManager.java

2022-03-21 Thread GitBox


ddrid opened a new pull request #11924:
URL: https://github.com/apache/kafka/pull/11924


   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] lkokhreidze commented on pull request #11923: KAFKA-6718 / Documentation

2022-03-21 Thread GitBox


lkokhreidze commented on pull request #11923:
URL: https://github.com/apache/kafka/pull/11923#issuecomment-1073821545


   Call for review @cadonna @showuon 






[GitHub] [kafka] lkokhreidze commented on a change in pull request #11923: KAFKA-6718 / Documentation

2022-03-21 Thread GitBox


lkokhreidze commented on a change in pull request #11923:
URL: https://github.com/apache/kafka/pull/11923#discussion_r831038617



##
File path: clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
##
@@ -33,8 +31,10 @@
 import java.util.Map;
 import java.util.Set;
 import java.util.function.BiConsumer;
+import java.util.function.Function;
 import java.util.function.Supplier;
 import java.util.regex.Pattern;
+import java.util.stream.Collectors;

Review comment:
   Sorry about this. Accidentally formatted with idea :( 

##
File path: clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
##
@@ -33,8 +31,10 @@
 import java.util.Map;
 import java.util.Set;
 import java.util.function.BiConsumer;
+import java.util.function.Function;
 import java.util.function.Supplier;
 import java.util.regex.Pattern;
+import java.util.stream.Collectors;

Review comment:
   Sorry about this. Accidentally formatted imports :( 








[GitHub] [kafka] lkokhreidze commented on a change in pull request #11923: KAFKA-6718 / Documentation

2022-03-21 Thread GitBox


lkokhreidze commented on a change in pull request #11923:
URL: https://github.com/apache/kafka/pull/11923#discussion_r831038170



##
File path: clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java
##
@@ -1140,6 +1140,11 @@ public void ensureValid(final String name, final Object 
value) {
 throw new ConfigException(name, value, "exceeds maximum list 
size of [" + maxSize + "].");
 }
 }
+
+@Override
+public String toString() {
+return "List containing maximum of " + maxSize + " elements";

Review comment:
   This message is included in configuration's `Valid values` paragraph.








[GitHub] [kafka] lkokhreidze opened a new pull request #11923: KAFKA-6718 / Documentation

2022-03-21 Thread GitBox


lkokhreidze opened a new pull request #11923:
URL: https://github.com/apache/kafka/pull/11923


   Validated by running kafka-site locally as described in 
[here](https://cwiki.apache.org/confluence/display/KAFKA/Setup+Kafka+Website+on+Local+Apache+Server).
   
   Screenshots
   https://user-images.githubusercontent.com/8927925/159258573-7b908ba7-6954-4587-9dcb-426b338ba5d2.png";>
   
   https://user-images.githubusercontent.com/8927925/159258584-5f883f08-8cdb-4801-9770-d05c50e5fad0.png";>
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] idank commented on a change in pull request #11900: Fix class comparison in equals()

2022-03-21 Thread GitBox


idank commented on a change in pull request #11900:
URL: https://github.com/apache/kafka/pull/11900#discussion_r831023106



##
File path: 
clients/src/main/java/org/apache/kafka/server/policy/AlterConfigPolicy.java
##
@@ -71,7 +71,7 @@ public int hashCode() {
 
 @Override
 public boolean equals(Object o) {
-if (o == null || o.getClass() != o.getClass()) return false;
+if ((o == null) || (!o.getClass().equals(getClass()))) return false;

Review comment:
   
`clients/src/main/java/org/apache/kafka/common/protocol/types/RawTaggedField.java`
 is doing the same thing fwiw. I don't have a strong preference other than I 
don't have the code checked out and it takes O(hours) to clone the repo. If 
you're fine either way or want to take this fix yourself, I don't mind.








[GitHub] [kafka] edoardocomar commented on pull request #7898: KAFKA-9366: Change log4j dependency into log4j2

2022-03-21 Thread GitBox


edoardocomar commented on pull request #7898:
URL: https://github.com/apache/kafka/pull/7898#issuecomment-1073801293


   Hi @dongjinleekr would you consider this patch to fix the compilation error 
   
   ```
   diff --git 
streams/src/test/java/org/apache/kafka/streams/StreamsConfigTest.java 
streams/src/test/java/org/apache/kafka/streams/StreamsConfigTest.java
   index 2bfbe4e36b..3cbb5c6369 100644
   --- streams/src/test/java/org/apache/kafka/streams/StreamsConfigTest.java
   +++ streams/src/test/java/org/apache/kafka/streams/StreamsConfigTest.java
   @@ -1039,7 +1039,9 @@ public class StreamsConfigTest {
@Test
public void shouldSpecifyRocksdbWhenNotExplicitlyAddedToConfigs() {
final String expectedDefaultStoreType = StreamsConfig.ROCKS_DB;
   -final String actualDefaultStoreType = streamsConfig.getString(DEFAULT_DSL_STORE_CONFIG);
   +props.put(DEFAULT_DSL_STORE_CONFIG, expectedDefaultStoreType);
   +final StreamsConfig config = new StreamsConfig(props);
   +final String actualDefaultStoreType = config.getString(DEFAULT_DSL_STORE_CONFIG);
    assertEquals("default.dsl.store should be \"rocksDB\"", expectedDefaultStoreType, actualDefaultStoreType);
    }
   
   ```






[GitHub] [kafka] tombentley commented on pull request #11922: Update upgrade.html for 3.1.1

2022-03-21 Thread GitBox


tombentley commented on pull request #11922:
URL: https://github.com/apache/kafka/pull/11922#issuecomment-1073789686


   @mimaison please could you review?






[GitHub] [kafka] erikgb commented on pull request #11916: Allow unencrypted private keys when using PEM files

2022-03-21 Thread GitBox


erikgb commented on pull request #11916:
URL: https://github.com/apache/kafka/pull/11916#issuecomment-1073784887


   I think the requirement for encrypted private key when using keystores (JKS, 
PKCS12) originates from a limitation in Java. So while consistency in general 
makes sense, I think it also makes sense to move forward considering the 
adoption of cloud-based applications when introducing a new feature/format.
   
   But getting a wider discussion on this makes sense.






[jira] [Updated] (KAFKA-13756) Connect validate endpoint should return proper response for invalid connector class

2022-03-21 Thread Daniel Urban (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Urban updated KAFKA-13756:
-
Description: 
Currently, if there is an issue with  the connector class, the validate 
endpoint returns a 400 or a 500 response.

Instead, it should return a well formatted response containing a proper 
validation error message.

  was:
Currently, if there is an issue with the connector name or the connector class, 
the validate endpoint returns a 500 response.

Instead, it should return a well formatted response containing proper 
validation error messages.


> Connect validate endpoint should return proper response for invalid connector 
> class
> ---
>
> Key: KAFKA-13756
> URL: https://issues.apache.org/jira/browse/KAFKA-13756
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Daniel Urban
>Priority: Major
>
> Currently, if there is an issue with  the connector class, the validate 
> endpoint returns a 400 or a 500 response.
> Instead, it should return a well formatted response containing a proper 
> validation error message.





[GitHub] [kafka] dajac commented on a change in pull request #11900: Fix class comparison in equals()

2022-03-21 Thread GitBox


dajac commented on a change in pull request #11900:
URL: https://github.com/apache/kafka/pull/11900#discussion_r830999098



##
File path: 
clients/src/main/java/org/apache/kafka/server/policy/AlterConfigPolicy.java
##
@@ -71,7 +71,7 @@ public int hashCode() {
 
 @Override
 public boolean equals(Object o) {
-if (o == null || o.getClass() != o.getClass()) return false;
+if ((o == null) || (!o.getClass().equals(getClass()))) return false;

Review comment:
   Oh.. I missed that. I thought it was only about using `equals`. Could 
we keep using `!=` instead of `equals` though? It might be worth adding a small 
unit test for this method, what do you think? That would avoid repeating such a 
mistake.








[jira] [Updated] (KAFKA-13756) Connect validate endpoint should return proper response for invalid connector class

2022-03-21 Thread Daniel Urban (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Urban updated KAFKA-13756:
-
Summary: Connect validate endpoint should return proper response for 
invalid connector class  (was: Connect validate endpoint should return proper 
response on name and connector class error)

> Connect validate endpoint should return proper response for invalid connector 
> class
> ---
>
> Key: KAFKA-13756
> URL: https://issues.apache.org/jira/browse/KAFKA-13756
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Daniel Urban
>Priority: Major
>
> Currently, if there is an issue with the connector name or the connector 
> class, the validate endpoint returns a 500 response.
> Instead, it should return a well formatted response containing proper 
> validation error messages.





[GitHub] [kafka] dajac commented on pull request #11916: Allow unencrypted private keys when using PEM files

2022-03-21 Thread GitBox


dajac commented on pull request #11916:
URL: https://github.com/apache/kafka/pull/11916#issuecomment-1073769893


   Thanks for the clarification @rajinisivaram. That makes sense to me. @erikgb 
@chromy96 Do you mind bringing this discussion to the KIP (KIP-651) discussion 
thread?






[GitHub] [kafka] idank commented on a change in pull request #11900: Fix class comparison in equals()

2022-03-21 Thread GitBox


idank commented on a change in pull request #11900:
URL: https://github.com/apache/kafka/pull/11900#discussion_r830989398



##
File path: 
clients/src/main/java/org/apache/kafka/server/policy/AlterConfigPolicy.java
##
@@ -71,7 +71,7 @@ public int hashCode() {
 
 @Override
 public boolean equals(Object o) {
-if (o == null || o.getClass() != o.getClass()) return false;
+if ((o == null) || (!o.getClass().equals(getClass()))) return false;

Review comment:
   Disregard that, I read your comment wrong. The previous behavior was 
comparing the incoming object against itself. So not the same.
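The bug being discussed is subtle: `o.getClass() != o.getClass()` compares the argument's class with itself, so the class check never fails and the subsequent cast can throw. A minimal stand-in class (illustrative, not the real `AlterConfigPolicy.RequestMetadata` API) showing the corrected check:

```java
import java.util.Objects;

// Simplified, hypothetical stand-in for AlterConfigPolicy.RequestMetadata;
// the field and constructor are illustrative, not the real API.
public final class RequestMetadata {
    private final String resourceName;

    public RequestMetadata(String resourceName) {
        this.resourceName = resourceName;
    }

    @Override
    public boolean equals(Object o) {
        // Buggy original: `o.getClass() != o.getClass()` compared the
        // argument's class with itself, which can never be unequal, so any
        // non-null object of any type slipped past the class check and the
        // later cast could throw. Fixed: compare against this object's class.
        if (o == null || o.getClass() != getClass()) return false;
        RequestMetadata other = (RequestMetadata) o;
        return Objects.equals(resourceName, other.resourceName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(resourceName);
    }

    public static void main(String[] args) {
        System.out.println(new RequestMetadata("a").equals("a")); // false, no ClassCastException
    }
}
```

A unit test passing an object of a different type, as suggested in the thread, is exactly what catches this class of mistake.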








[GitHub] [kafka] idank commented on a change in pull request #11900: Fix class comparison in equals()

2022-03-21 Thread GitBox


idank commented on a change in pull request #11900:
URL: https://github.com/apache/kafka/pull/11900#discussion_r830985133



##
File path: 
clients/src/main/java/org/apache/kafka/server/policy/AlterConfigPolicy.java
##
@@ -71,7 +71,7 @@ public int hashCode() {
 
 @Override
 public boolean equals(Object o) {
-if (o == null || o.getClass() != o.getClass()) return false;
+if ((o == null) || (!o.getClass().equals(getClass()))) return false;

Review comment:
   I took this from somewhere else in the code base, so not sure honestly. 
Should I replace this with ==?








[GitHub] [kafka] rajinisivaram commented on pull request #11916: Allow unencrypted private keys when using PEM files

2022-03-21 Thread GitBox


rajinisivaram commented on pull request #11916:
URL: https://github.com/apache/kafka/pull/11916#issuecomment-1073729540


   With other file formats for key stores (JKS, PKCS12), I don't think we 
currently allow unencrypted keys. So for PEM, it made sense to keep the 
requirements for secure files consistent. For PEM in string format that is set 
directly as a config, we treat strings similar to other password configs. 
Configs can be externalized in this case (e.g. stored in Vault) and hence we 
don't require separate encryption. I think it may be better to add a note to 
the KIP discussion thread to get wider opinion on relaxing the requirement for 
PEM files.






[GitHub] [kafka] RivenSun2 commented on pull request #11919: MINOR: Unify the log output of JaasContext.defaultContext

2022-03-21 Thread GitBox


RivenSun2 commented on pull request #11919:
URL: https://github.com/apache/kafka/pull/11919#issuecomment-1073729455


   Hi @dajac Thank you for your reply.
   You are correct that we should not modify the log output in the `defaultContext` method.
   But we are still overlooking that in the `loadServerContext` method there is no log output when both `dynamicJaasConfig` and `configs.get(SaslConfigs.SASL_JAAS_CONFIG)` are null; a debug log should be emitted as a reminder:
   `LOG.debug("Kafka Server SASL property 'sasl.jaas.config' is not set");`
   
   WDYT?
   Thanks.






[GitHub] [kafka] dajac merged pull request #11921: MINOR: Small cleanups in the AclAuthorizer

2022-03-21 Thread GitBox


dajac merged pull request #11921:
URL: https://github.com/apache/kafka/pull/11921


   






[jira] [Created] (KAFKA-13756) Connect validate endpoint should return proper response on name and connector class error

2022-03-21 Thread Daniel Urban (Jira)
Daniel Urban created KAFKA-13756:


 Summary: Connect validate endpoint should return proper response 
on name and connector class error
 Key: KAFKA-13756
 URL: https://issues.apache.org/jira/browse/KAFKA-13756
 Project: Kafka
  Issue Type: Improvement
Reporter: Daniel Urban


Currently, if there is an issue with the connector name or the connector class, 
the validate endpoint returns a 500 response.

Instead, it should return a well-formatted response containing proper validation error messages.
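
For illustration, such a response could follow the shape the validate endpoint (`PUT /connector-plugins/{pluginName}/config/validate`) already uses for other config errors; the connector class name and error text below are hypothetical, and the `definition` fields are abbreviated:

```json
{
  "name": "FileStreamSinkConnector",
  "error_count": 1,
  "groups": ["Common"],
  "configs": [
    {
      "definition": { "name": "connector.class", "type": "STRING", "required": true },
      "value": {
        "name": "connector.class",
        "value": "com.example.MissingConnector",
        "errors": ["Class com.example.MissingConnector could not be found."]
      }
    }
  ]
}
```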



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [kafka] dajac commented on pull request #11916: Allow unencrypted private keys when using PEM files

2022-03-21 Thread GitBox


dajac commented on pull request #11916:
URL: https://github.com/apache/kafka/pull/11916#issuecomment-1073669836


   @erikgb Thanks for your comment. I tend to agree with you. It would be good 
to have @rajinisivaram's opinion here as she wrote that part of the code.






[GitHub] [kafka] dajac commented on a change in pull request #11900: Fix class comparison in equals()

2022-03-21 Thread GitBox


dajac commented on a change in pull request #11900:
URL: https://github.com/apache/kafka/pull/11900#discussion_r830885264



##
File path: clients/src/main/java/org/apache/kafka/server/policy/AlterConfigPolicy.java
##
@@ -71,7 +71,7 @@ public int hashCode() {
 
 @Override
 public boolean equals(Object o) {
-        if (o == null || o.getClass() != o.getClass()) return false;
+        if ((o == null) || (!o.getClass().equals(getClass()))) return false;

Review comment:
   As far as I know, `Class` does not override `equals`, so the behavior is similar to the previous one. Am I missing something?
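
   For illustration, a minimal, self-contained sketch of why the original expression can never detect a class mismatch (the `Base`/`Sub` class names are hypothetical, not taken from the Kafka code base):

```java
// Demonstrates why `o.getClass() != o.getClass()` is always false for
// non-null o: it compares the argument's class against itself.
public class ClassCompareDemo {
    static class Base { }
    static class Sub extends Base { }

    // Buggy check from the original diff: o's class is compared with itself.
    static boolean buggyMismatch(Object self, Object o) {
        return o == null || o.getClass() != o.getClass();
    }

    // Fixed check: compare the argument's class with the receiver's class.
    static boolean fixedMismatch(Object self, Object o) {
        return o == null || !o.getClass().equals(self.getClass());
    }

    public static void main(String[] args) {
        Base base = new Base();
        Sub sub = new Sub();
        // Different classes, but the buggy check reports no mismatch.
        System.out.println(buggyMismatch(base, sub)); // prints: false
        // The fixed check correctly reports the mismatch.
        System.out.println(fixedMismatch(base, sub)); // prints: true
    }
}
```

   Since `Class` instances are unique per class loader, `!=` and `!equals(...)` behave the same here; the actual fix is comparing against `getClass()` of the receiver instead of the argument.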








[GitHub] [kafka] vamossagar12 commented on pull request #11796: KAFKA-13152: Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2022-03-21 Thread GitBox


vamossagar12 commented on pull request #11796:
URL: https://github.com/apache/kafka/pull/11796#issuecomment-1073657931


   > @vamossagar12 could you resolve the conflicts before I re-trigger jenkins 
again?
   
   @guozhangwang done. On my local machine, only one Streams test failed: 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest#shouldRestoreState






[GitHub] [kafka] showuon commented on pull request #11888: MINOR: Pass materialized to the inner KTable instance

2022-03-21 Thread GitBox


showuon commented on pull request #11888:
URL: https://github.com/apache/kafka/pull/11888#issuecomment-1073654080


   @msigmond , thanks for your contribution!






[GitHub] [kafka] showuon merged pull request #11888: MINOR: Pass materialized to the inner KTable instance

2022-03-21 Thread GitBox


showuon merged pull request #11888:
URL: https://github.com/apache/kafka/pull/11888


   






[GitHub] [kafka] erikgb commented on pull request #11916: Allow unencrypted private keys when using PEM files

2022-03-21 Thread GitBox


erikgb commented on pull request #11916:
URL: https://github.com/apache/kafka/pull/11916#issuecomment-1073653291


   > @rajinisivaram Is there always a reason to require the key password?
   
   @dajac At least I do not think there is a good reason for it. 😸  We have 
found a [related comment in the test 
code](https://github.com/apache/kafka/blob/3.1/clients/src/test/java/org/apache/kafka/common/network/SslTransportLayerTest.java#L493-L495).
 IMO Kafka should not **enforce** best practices. It would be sufficient to 
_recommend_ encrypted private keys.
   
   This issue is currently blocking us from using PEM-formatted certificates 
issued by [cert-manager](https://github.com/cert-manager/cert-manager) with 
Kafka - since cert-manager does not support encrypted private keys.
   
   I asked one of the main cert-manager maintainers about this, and got the 
following comments:
   
   > There is a benefit to encrypting this stuff if the cert is being persisted 
to disk and if the decryption key isn't written to the same disk, which is 
probably a pretty common setup in a lot of older non-cloud systems. The issue 
in a lot of more modern or cloud environments - especially in k8s - is that 
certs are ideally never going to be written to disk and wherever they are 
stored, they'd likely be stored next to their decryption keys.
   
   > So I guess everyone's right - in some environments encrypting the key is 
probably worth it, and in some it's a false sense of security and wasted CPU 
cycles. That to me means that being able to choose to not encrypt the key is 
desirable.
   (This is assuming that the encryption is actually secure, which might not be 
true for all methods of encrypting private keys)






[GitHub] [kafka] dajac commented on a change in pull request #11919: MINOR: Unify the log output of JaasContext.defaultContext

2022-03-21 Thread GitBox


dajac commented on a change in pull request #11919:
URL: https://github.com/apache/kafka/pull/11919#discussion_r830873954



##
File path: clients/src/main/java/org/apache/kafka/common/security/JaasContext.java
##
@@ -100,13 +100,8 @@ private static JaasContext defaultContext(JaasContext.Type contextType, String l
         String jaasConfigFile = System.getProperty(JaasUtils.JAVA_LOGIN_CONFIG_PARAM);
         if (jaasConfigFile == null) {
-            if (contextType == Type.CLIENT) {
-                LOG.debug("System property '" + JaasUtils.JAVA_LOGIN_CONFIG_PARAM + "' and Kafka SASL property '" +
-                        SaslConfigs.SASL_JAAS_CONFIG + "' are not set, using default JAAS configuration.");
-            } else {
-                LOG.debug("System property '" + JaasUtils.JAVA_LOGIN_CONFIG_PARAM + "' is not set, using default JAAS " +
-                        "configuration.");

Review comment:
   I am not entirely sure about this. I think that `SASL_JAAS_CONFIG` must be prefixed with the SASL mechanism, otherwise we ignore it. In that case, we log a warning in `loadServerContext`. I suppose this is the reason why we don't use a generic message for both cases here. It does not make sense to say that `SASL_JAAS_CONFIG` is not set when we would ignore the unprefixed form anyway.
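
   As a concrete sketch of the prefixing rule described above (listener name, mechanism, and credentials are placeholders): clients use the plain property, while brokers only honor the listener/mechanism-prefixed form:

```properties
# Client configuration: the unprefixed property is used directly.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="client" password="client-secret";

# Broker configuration: the property must carry the
# listener.name.<listener>.<mechanism>. prefix; an unprefixed value is
# ignored on the server side (with a warning logged in loadServerContext).
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" password="admin-secret";
```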








[GitHub] [kafka] showuon merged pull request #11898: KAFKA-7540: commit offset sync before close

2022-03-21 Thread GitBox


showuon merged pull request #11898:
URL: https://github.com/apache/kafka/pull/11898


   






[GitHub] [kafka] showuon commented on pull request #11898: KAFKA-7540: commit offset sync before close

2022-03-21 Thread GitBox


showuon commented on pull request #11898:
URL: https://github.com/apache/kafka/pull/11898#issuecomment-1073644477


   Failed tests are unrelated.
   ```
   Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState
   Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.streams.integration.StandbyTaskCreationIntegrationTest.shouldCreateStandByTasksForMaterializedAndOptimizedSourceTables
   Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQueryAllStalePartitionStores
   Build / JDK 17 and Scala 2.13 / 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQueryStoresAfterAddingAndRemovingStreamThread
   Build / JDK 11 and Scala 2.13 / 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState
   Build / JDK 11 and Scala 2.13 / 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState
   Build / JDK 8 and Scala 2.12 / 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState
   ```






[GitHub] [kafka] dajac commented on pull request #11916: Allow unencrypted private keys when using PEM files

2022-03-21 Thread GitBox


dajac commented on pull request #11916:
URL: https://github.com/apache/kafka/pull/11916#issuecomment-1073640141


   @rajinisivaram Is there always a reason to require the key password?






[GitHub] [kafka] dajac opened a new pull request #11921: MINOR: Small cleanups in the AclAuthorizer

2022-03-21 Thread GitBox


dajac opened a new pull request #11921:
URL: https://github.com/apache/kafka/pull/11921


   I was reading the AclAuthorizer and I made a few small cleanups.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   

